Test Report: Docker_Linux_crio_arm64 22081

502ebf1e50e408071a7e5daf27f82abd53674654:2025-12-09:42698

Failed tests (41/316)

Order  Failed test  Duration (s)
38 TestAddons/serial/Volcano 0.37
44 TestAddons/parallel/Registry 14.69
45 TestAddons/parallel/RegistryCreds 0.52
46 TestAddons/parallel/Ingress 142.6
47 TestAddons/parallel/InspektorGadget 6.27
48 TestAddons/parallel/MetricsServer 5.47
50 TestAddons/parallel/CSI 22.99
51 TestAddons/parallel/Headlamp 3.38
52 TestAddons/parallel/CloudSpanner 6.29
53 TestAddons/parallel/LocalPath 8.63
54 TestAddons/parallel/NvidiaDevicePlugin 6.27
55 TestAddons/parallel/Yakd 6.29
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 502.05
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 369.46
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 2.57
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 2.49
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 2.72
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 734.66
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 2.21
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 0.06
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 1.76
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 3.11
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 2.32
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 241.73
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 1.46
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.58
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 0.11
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 109.64
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 0.05
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.27
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.29
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.26
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.29
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.27
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 2.38
273 TestMultiControlPlane/serial/RestartSecondaryNode 509.53
293 TestJSONOutput/pause/Command 1.8
299 TestJSONOutput/unpause/Command 1.76
358 TestKubernetesUpgrade 784.17
401 TestPause/serial/Pause 8.57
482 TestStartStop/group/no-preload/serial/SecondStart 7200.077
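
The addon failures detailed below (Volcano, Registry, RegistryCreds) share one signature: `addons disable` exits with status 11 (MK_ADDON_DISABLE_PAUSED) after its paused-state pre-flight runs `sudo runc list -f json` on the node and gets "open /run/runc: no such file or directory". To re-run a single entry from this table locally — a sketch, assuming a minikube source checkout with the integration suite under test/integration and the binary already built at out/minikube-linux-arm64 — the standard Go test runner can target one test by name:

    # Re-run one failed test from this table; -run takes a slash-separated
    # pattern matching the test and subtest names exactly as listed above.
    go test -v -timeout 30m ./test/integration -run 'TestAddons/parallel/Registry'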
TestAddons/serial/Volcano (0.37s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:910: skipping: crio not supported
addons_test.go:1113: (dbg) Run:  out/minikube-linux-arm64 -p addons-377526 addons disable volcano --alsologtostderr -v=1
addons_test.go:1113: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377526 addons disable volcano --alsologtostderr -v=1: exit status 11 (364.425546ms)

-- stdout --

-- /stdout --
** stderr ** 
	I1209 04:19:31.164012 1587394 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:19:31.164605 1587394 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:19:31.164626 1587394 out.go:374] Setting ErrFile to fd 2...
	I1209 04:19:31.164632 1587394 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:19:31.165027 1587394 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 04:19:31.165614 1587394 mustload.go:66] Loading cluster: addons-377526
	I1209 04:19:31.171211 1587394 config.go:182] Loaded profile config "addons-377526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:19:31.171489 1587394 addons.go:622] checking whether the cluster is paused
	I1209 04:19:31.173879 1587394 config.go:182] Loaded profile config "addons-377526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:19:31.173913 1587394 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:19:31.175539 1587394 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:19:31.201942 1587394 ssh_runner.go:195] Run: systemctl --version
	I1209 04:19:31.201998 1587394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:19:31.230650 1587394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:19:31.358797 1587394 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 04:19:31.358889 1587394 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:19:31.391998 1587394 cri.go:89] found id: "c04442e39ef35fdc720b3c2bb3a77da977256d816f2eec2ebcfa6b979f8d0968"
	I1209 04:19:31.392022 1587394 cri.go:89] found id: "7aebdd3431a655622c91099e2e13d404de79d2d92cd3744233ad482bd5950b4a"
	I1209 04:19:31.392028 1587394 cri.go:89] found id: "18febaede59c2967af53b607d5a0971f75da0dffdc720977888c74bc4b43f989"
	I1209 04:19:31.392031 1587394 cri.go:89] found id: "8cf0b6bd32f5bb3b5d0c99a5cb73fc3b6625311dbba876d4d3e383bbd52b8844"
	I1209 04:19:31.392035 1587394 cri.go:89] found id: "174130d7501d2a4338753b358cf8658f2791da0197e2ddee56f4682364d0e5ce"
	I1209 04:19:31.392039 1587394 cri.go:89] found id: "a69a96490b5aefb4b7039ba55efc49cccbd001d0e16126c16649afdae1e0e5be"
	I1209 04:19:31.392042 1587394 cri.go:89] found id: "cc37fac1bc08a55afea23e467cf7ab65d053708170c6c35c316845ac5ad895e5"
	I1209 04:19:31.392045 1587394 cri.go:89] found id: "9489ae99adda39fae4cb5dfa918abcbcec4c6b2882922f49b01c09790b02500b"
	I1209 04:19:31.392053 1587394 cri.go:89] found id: "197524f2c1763b0f2e842c6b573a4d1bfb3cf7dfa8bea6daacdeff861043d351"
	I1209 04:19:31.392060 1587394 cri.go:89] found id: "8d2bef8d891580f057b9dca614e75513beeac88caf7536355ac38b71a4929ee5"
	I1209 04:19:31.392063 1587394 cri.go:89] found id: "e89fcd7e7a65121ec84cd2c9d89bbf436ccc5090968a417d230a03fafb1d57cb"
	I1209 04:19:31.392066 1587394 cri.go:89] found id: "9b3a8c868c3c905e36617afaf33522db2b0959f5baf822b5b3bad893fa0da43a"
	I1209 04:19:31.392070 1587394 cri.go:89] found id: "d448cac096a040574fbee288ffbf1b79d931e05be65b8699003d18c35b213d99"
	I1209 04:19:31.392073 1587394 cri.go:89] found id: "a549d8652b346e26791e868967bc4ba6691a6f3e6d6890628c34d5aaabaee422"
	I1209 04:19:31.392077 1587394 cri.go:89] found id: "365b8c540ac8b4ba2ffbea68247ecdcb4b22e31ec4b497e44af8153b9232cba0"
	I1209 04:19:31.392082 1587394 cri.go:89] found id: "ade186251b0b03d5e21b3b509f2bf86293ef5ea617865111f2dd375f2cfaa2af"
	I1209 04:19:31.392085 1587394 cri.go:89] found id: "895a853e4aab3bfd20dc33efe93732055e9143ac6017c4be43840f854767cfac"
	I1209 04:19:31.392090 1587394 cri.go:89] found id: "3f583b93b3d82da13bf4c0cc7590397283a9f565f160c0b4aad9b625564dde0f"
	I1209 04:19:31.392093 1587394 cri.go:89] found id: "f23d383bb901021ad468c9e01555bb740a0facf5322dcee6b0def8a8f5c26cef"
	I1209 04:19:31.392096 1587394 cri.go:89] found id: "3e19f8eb0be8689c1e6db170c4a1893db77016e40e2d7ee36ae46433d1ab5dc7"
	I1209 04:19:31.392105 1587394 cri.go:89] found id: "5f20869a412bbccdd019d0d88792fb1e038ef017fb684b743afc406185107fab"
	I1209 04:19:31.392109 1587394 cri.go:89] found id: "23444ddd657bbd00eed4c8df42d61dc49f01325e6c8f6ca46b95e4e0ebfec769"
	I1209 04:19:31.392112 1587394 cri.go:89] found id: "3d9befd5158d0fb9dcd408b398d0ade47c7417da742e387aa66109ca8ed7918e"
	I1209 04:19:31.392115 1587394 cri.go:89] found id: ""
	I1209 04:19:31.392174 1587394 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 04:19:31.420532 1587394 out.go:203] 
	W1209 04:19:31.423628 1587394 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T04:19:31Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T04:19:31Z" level=error msg="open /run/runc: no such file or directory"
	
	W1209 04:19:31.423662 1587394 out.go:285] * 
	* 
	W1209 04:19:31.431659 1587394 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 04:19:31.434685 1587394 out.go:203] 

** /stderr **
addons_test.go:1115: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-377526 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.37s)
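
The disable command above never reaches the addon itself: it aborts in the paused-state pre-flight. A minimal manual check — a sketch, assuming the addons-377526 profile from this run is still up — is to issue the exact command minikube ran over SSH and then look for the missing state directory:

    # The pre-flight behind every addons-disable failure in this report:
    # runc exits 1 because its state directory is absent on the node, and
    # minikube maps that to MK_ADDON_DISABLE_PAUSED (exit status 11).
    out/minikube-linux-arm64 -p addons-377526 ssh 'sudo runc list -f json'
    out/minikube-linux-arm64 -p addons-377526 ssh 'ls -ld /run/runc'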

TestAddons/parallel/Registry (14.69s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:442: registry stabilized in 12.878044ms
addons_test.go:444: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-pd2mr" [53908d34-a310-48b3-ae54-ebda566b420b] Running
addons_test.go:444: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004082209s
addons_test.go:447: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-nlsrb" [2b82ec5a-1e98-4bd4-b422-b6fb23cad87c] Running
addons_test.go:447: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004052576s
addons_test.go:452: (dbg) Run:  kubectl --context addons-377526 delete po -l run=registry-test --now
addons_test.go:457: (dbg) Run:  kubectl --context addons-377526 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:457: (dbg) Done: kubectl --context addons-377526 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.129345875s)
addons_test.go:471: (dbg) Run:  out/minikube-linux-arm64 -p addons-377526 ip
2025/12/09 04:19:55 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1113: (dbg) Run:  out/minikube-linux-arm64 -p addons-377526 addons disable registry --alsologtostderr -v=1
addons_test.go:1113: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377526 addons disable registry --alsologtostderr -v=1: exit status 11 (282.740304ms)

-- stdout --

-- /stdout --
** stderr ** 
	I1209 04:19:55.343544 1588297 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:19:55.344425 1588297 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:19:55.344467 1588297 out.go:374] Setting ErrFile to fd 2...
	I1209 04:19:55.344489 1588297 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:19:55.345276 1588297 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 04:19:55.345655 1588297 mustload.go:66] Loading cluster: addons-377526
	I1209 04:19:55.346089 1588297 config.go:182] Loaded profile config "addons-377526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:19:55.346129 1588297 addons.go:622] checking whether the cluster is paused
	I1209 04:19:55.346277 1588297 config.go:182] Loaded profile config "addons-377526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:19:55.346308 1588297 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:19:55.346921 1588297 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:19:55.363910 1588297 ssh_runner.go:195] Run: systemctl --version
	I1209 04:19:55.363977 1588297 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:19:55.381921 1588297 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:19:55.491158 1588297 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 04:19:55.491283 1588297 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:19:55.535792 1588297 cri.go:89] found id: "c04442e39ef35fdc720b3c2bb3a77da977256d816f2eec2ebcfa6b979f8d0968"
	I1209 04:19:55.535812 1588297 cri.go:89] found id: "7aebdd3431a655622c91099e2e13d404de79d2d92cd3744233ad482bd5950b4a"
	I1209 04:19:55.535816 1588297 cri.go:89] found id: "18febaede59c2967af53b607d5a0971f75da0dffdc720977888c74bc4b43f989"
	I1209 04:19:55.535820 1588297 cri.go:89] found id: "8cf0b6bd32f5bb3b5d0c99a5cb73fc3b6625311dbba876d4d3e383bbd52b8844"
	I1209 04:19:55.535824 1588297 cri.go:89] found id: "174130d7501d2a4338753b358cf8658f2791da0197e2ddee56f4682364d0e5ce"
	I1209 04:19:55.535850 1588297 cri.go:89] found id: "a69a96490b5aefb4b7039ba55efc49cccbd001d0e16126c16649afdae1e0e5be"
	I1209 04:19:55.535855 1588297 cri.go:89] found id: "cc37fac1bc08a55afea23e467cf7ab65d053708170c6c35c316845ac5ad895e5"
	I1209 04:19:55.535863 1588297 cri.go:89] found id: "9489ae99adda39fae4cb5dfa918abcbcec4c6b2882922f49b01c09790b02500b"
	I1209 04:19:55.535866 1588297 cri.go:89] found id: "197524f2c1763b0f2e842c6b573a4d1bfb3cf7dfa8bea6daacdeff861043d351"
	I1209 04:19:55.535873 1588297 cri.go:89] found id: "8d2bef8d891580f057b9dca614e75513beeac88caf7536355ac38b71a4929ee5"
	I1209 04:19:55.535876 1588297 cri.go:89] found id: "e89fcd7e7a65121ec84cd2c9d89bbf436ccc5090968a417d230a03fafb1d57cb"
	I1209 04:19:55.535882 1588297 cri.go:89] found id: "9b3a8c868c3c905e36617afaf33522db2b0959f5baf822b5b3bad893fa0da43a"
	I1209 04:19:55.535891 1588297 cri.go:89] found id: "d448cac096a040574fbee288ffbf1b79d931e05be65b8699003d18c35b213d99"
	I1209 04:19:55.535900 1588297 cri.go:89] found id: "a549d8652b346e26791e868967bc4ba6691a6f3e6d6890628c34d5aaabaee422"
	I1209 04:19:55.535903 1588297 cri.go:89] found id: "365b8c540ac8b4ba2ffbea68247ecdcb4b22e31ec4b497e44af8153b9232cba0"
	I1209 04:19:55.535912 1588297 cri.go:89] found id: "ade186251b0b03d5e21b3b509f2bf86293ef5ea617865111f2dd375f2cfaa2af"
	I1209 04:19:55.535915 1588297 cri.go:89] found id: "895a853e4aab3bfd20dc33efe93732055e9143ac6017c4be43840f854767cfac"
	I1209 04:19:55.535922 1588297 cri.go:89] found id: "3f583b93b3d82da13bf4c0cc7590397283a9f565f160c0b4aad9b625564dde0f"
	I1209 04:19:55.535925 1588297 cri.go:89] found id: "f23d383bb901021ad468c9e01555bb740a0facf5322dcee6b0def8a8f5c26cef"
	I1209 04:19:55.535928 1588297 cri.go:89] found id: "3e19f8eb0be8689c1e6db170c4a1893db77016e40e2d7ee36ae46433d1ab5dc7"
	I1209 04:19:55.535932 1588297 cri.go:89] found id: "5f20869a412bbccdd019d0d88792fb1e038ef017fb684b743afc406185107fab"
	I1209 04:19:55.535939 1588297 cri.go:89] found id: "23444ddd657bbd00eed4c8df42d61dc49f01325e6c8f6ca46b95e4e0ebfec769"
	I1209 04:19:55.535945 1588297 cri.go:89] found id: "3d9befd5158d0fb9dcd408b398d0ade47c7417da742e387aa66109ca8ed7918e"
	I1209 04:19:55.535948 1588297 cri.go:89] found id: ""
	I1209 04:19:55.536062 1588297 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 04:19:55.552680 1588297 out.go:203] 
	W1209 04:19:55.556028 1588297 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T04:19:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T04:19:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W1209 04:19:55.556063 1588297 out.go:285] * 
	* 
	W1209 04:19:55.566371 1588297 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 04:19:55.569393 1588297 out.go:203] 

** /stderr **
addons_test.go:1115: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-377526 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (14.69s)

TestAddons/parallel/RegistryCreds (0.52s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:383: registry-creds stabilized in 4.204317ms
addons_test.go:385: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-377526
addons_test.go:392: (dbg) Run:  kubectl --context addons-377526 -n kube-system get secret -o yaml
addons_test.go:1113: (dbg) Run:  out/minikube-linux-arm64 -p addons-377526 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1113: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377526 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (270.106562ms)

-- stdout --

-- /stdout --
** stderr ** 
	I1209 04:20:33.771011 1589692 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:20:33.771862 1589692 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:20:33.771885 1589692 out.go:374] Setting ErrFile to fd 2...
	I1209 04:20:33.771892 1589692 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:20:33.772222 1589692 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 04:20:33.772684 1589692 mustload.go:66] Loading cluster: addons-377526
	I1209 04:20:33.773134 1589692 config.go:182] Loaded profile config "addons-377526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:20:33.773156 1589692 addons.go:622] checking whether the cluster is paused
	I1209 04:20:33.773307 1589692 config.go:182] Loaded profile config "addons-377526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:20:33.773328 1589692 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:20:33.773986 1589692 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:20:33.791835 1589692 ssh_runner.go:195] Run: systemctl --version
	I1209 04:20:33.791900 1589692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:20:33.812953 1589692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:20:33.921559 1589692 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 04:20:33.921648 1589692 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:20:33.958251 1589692 cri.go:89] found id: "c04442e39ef35fdc720b3c2bb3a77da977256d816f2eec2ebcfa6b979f8d0968"
	I1209 04:20:33.958274 1589692 cri.go:89] found id: "7aebdd3431a655622c91099e2e13d404de79d2d92cd3744233ad482bd5950b4a"
	I1209 04:20:33.958280 1589692 cri.go:89] found id: "18febaede59c2967af53b607d5a0971f75da0dffdc720977888c74bc4b43f989"
	I1209 04:20:33.958284 1589692 cri.go:89] found id: "8cf0b6bd32f5bb3b5d0c99a5cb73fc3b6625311dbba876d4d3e383bbd52b8844"
	I1209 04:20:33.958291 1589692 cri.go:89] found id: "174130d7501d2a4338753b358cf8658f2791da0197e2ddee56f4682364d0e5ce"
	I1209 04:20:33.958296 1589692 cri.go:89] found id: "a69a96490b5aefb4b7039ba55efc49cccbd001d0e16126c16649afdae1e0e5be"
	I1209 04:20:33.958299 1589692 cri.go:89] found id: "cc37fac1bc08a55afea23e467cf7ab65d053708170c6c35c316845ac5ad895e5"
	I1209 04:20:33.958302 1589692 cri.go:89] found id: "9489ae99adda39fae4cb5dfa918abcbcec4c6b2882922f49b01c09790b02500b"
	I1209 04:20:33.958305 1589692 cri.go:89] found id: "197524f2c1763b0f2e842c6b573a4d1bfb3cf7dfa8bea6daacdeff861043d351"
	I1209 04:20:33.958311 1589692 cri.go:89] found id: "8d2bef8d891580f057b9dca614e75513beeac88caf7536355ac38b71a4929ee5"
	I1209 04:20:33.958315 1589692 cri.go:89] found id: "e89fcd7e7a65121ec84cd2c9d89bbf436ccc5090968a417d230a03fafb1d57cb"
	I1209 04:20:33.958318 1589692 cri.go:89] found id: "9b3a8c868c3c905e36617afaf33522db2b0959f5baf822b5b3bad893fa0da43a"
	I1209 04:20:33.958321 1589692 cri.go:89] found id: "d448cac096a040574fbee288ffbf1b79d931e05be65b8699003d18c35b213d99"
	I1209 04:20:33.958324 1589692 cri.go:89] found id: "a549d8652b346e26791e868967bc4ba6691a6f3e6d6890628c34d5aaabaee422"
	I1209 04:20:33.958327 1589692 cri.go:89] found id: "365b8c540ac8b4ba2ffbea68247ecdcb4b22e31ec4b497e44af8153b9232cba0"
	I1209 04:20:33.958332 1589692 cri.go:89] found id: "ade186251b0b03d5e21b3b509f2bf86293ef5ea617865111f2dd375f2cfaa2af"
	I1209 04:20:33.958341 1589692 cri.go:89] found id: "895a853e4aab3bfd20dc33efe93732055e9143ac6017c4be43840f854767cfac"
	I1209 04:20:33.958344 1589692 cri.go:89] found id: "3f583b93b3d82da13bf4c0cc7590397283a9f565f160c0b4aad9b625564dde0f"
	I1209 04:20:33.958348 1589692 cri.go:89] found id: "f23d383bb901021ad468c9e01555bb740a0facf5322dcee6b0def8a8f5c26cef"
	I1209 04:20:33.958351 1589692 cri.go:89] found id: "3e19f8eb0be8689c1e6db170c4a1893db77016e40e2d7ee36ae46433d1ab5dc7"
	I1209 04:20:33.958355 1589692 cri.go:89] found id: "5f20869a412bbccdd019d0d88792fb1e038ef017fb684b743afc406185107fab"
	I1209 04:20:33.958358 1589692 cri.go:89] found id: "23444ddd657bbd00eed4c8df42d61dc49f01325e6c8f6ca46b95e4e0ebfec769"
	I1209 04:20:33.958361 1589692 cri.go:89] found id: "3d9befd5158d0fb9dcd408b398d0ade47c7417da742e387aa66109ca8ed7918e"
	I1209 04:20:33.958365 1589692 cri.go:89] found id: ""
	I1209 04:20:33.958425 1589692 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 04:20:33.974659 1589692 out.go:203] 
	W1209 04:20:33.977696 1589692 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T04:20:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T04:20:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1209 04:20:33.977724 1589692 out.go:285] * 
	* 
	W1209 04:20:33.985958 1589692 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 04:20:33.988955 1589692 out.go:203] 

** /stderr **
addons_test.go:1115: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-377526 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.52s)

TestAddons/parallel/Ingress (142.6s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:269: (dbg) Run:  kubectl --context addons-377526 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:294: (dbg) Run:  kubectl --context addons-377526 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:307: (dbg) Run:  kubectl --context addons-377526 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:312: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [8873749b-fc83-4d6b-a2fb-7d14c29d4b62] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [8873749b-fc83-4d6b-a2fb-7d14c29d4b62] Running
addons_test.go:312: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 7.003712153s
I1209 04:20:16.390465 1580521 kapi.go:150] Service nginx in namespace default found.
addons_test.go:324: (dbg) Run:  out/minikube-linux-arm64 -p addons-377526 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:324: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377526 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.815631825s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:340: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
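
curl exits with status 28 on a timeout, so the request to the ingress controller on 127.0.0.1:80 hung rather than being refused. A hedged manual probe of the same endpoint, assuming the profile is still running, with a short client-side timeout so the hang surfaces quickly:

    # Same request the test issued, bounded to 10s; -w prints the HTTP status
    # if anything answers, and exit code 28 again means the transfer timed out.
    out/minikube-linux-arm64 -p addons-377526 ssh \
      "curl -s -m 10 -o /dev/null -w '%{http_code}' http://127.0.0.1/ -H 'Host: nginx.example.com'"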
addons_test.go:348: (dbg) Run:  kubectl --context addons-377526 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:353: (dbg) Run:  out/minikube-linux-arm64 -p addons-377526 ip
addons_test.go:359: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-377526
helpers_test.go:243: (dbg) docker inspect addons-377526:

-- stdout --
	[
	    {
	        "Id": "296d96ed056115803df5e9b6e1f695022ae85b36790b8d9d91c58e0053c079c9",
	        "Created": "2025-12-09T04:17:16.302063351Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1581901,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T04:17:16.363034845Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/296d96ed056115803df5e9b6e1f695022ae85b36790b8d9d91c58e0053c079c9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/296d96ed056115803df5e9b6e1f695022ae85b36790b8d9d91c58e0053c079c9/hostname",
	        "HostsPath": "/var/lib/docker/containers/296d96ed056115803df5e9b6e1f695022ae85b36790b8d9d91c58e0053c079c9/hosts",
	        "LogPath": "/var/lib/docker/containers/296d96ed056115803df5e9b6e1f695022ae85b36790b8d9d91c58e0053c079c9/296d96ed056115803df5e9b6e1f695022ae85b36790b8d9d91c58e0053c079c9-json.log",
	        "Name": "/addons-377526",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-377526:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-377526",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "296d96ed056115803df5e9b6e1f695022ae85b36790b8d9d91c58e0053c079c9",
	                "LowerDir": "/var/lib/docker/overlay2/0b9b90a408cecb1c1f1540c33ed7bd30543618811d9d78bf1cf983117fbb15c4-init/diff:/var/lib/docker/overlay2/cb3f2b8eaaa8875b2899fccd39c4eec1759909855a0b804bc10246bdeabb16ed/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0b9b90a408cecb1c1f1540c33ed7bd30543618811d9d78bf1cf983117fbb15c4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0b9b90a408cecb1c1f1540c33ed7bd30543618811d9d78bf1cf983117fbb15c4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0b9b90a408cecb1c1f1540c33ed7bd30543618811d9d78bf1cf983117fbb15c4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-377526",
	                "Source": "/var/lib/docker/volumes/addons-377526/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-377526",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-377526",
	                "name.minikube.sigs.k8s.io": "addons-377526",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e0eaeaed21825edaf1bad522f7a17c86d3db0cf1e084b8a616bbc6ae11d083e3",
	            "SandboxKey": "/var/run/docker/netns/e0eaeaed2182",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34240"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34241"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34244"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34242"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34243"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-377526": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:00:0f:b0:c4:e3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "542d04a282446d7d8563cb215ec8412fd0d13d00239eba6fd964d03646557a2d",
	                    "EndpointID": "7dd7e0b9fc86928ee481271349fa43f8523811d8ec609d6a5f0bc20f0aa26422",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-377526",
	                        "296d96ed0561"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
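
The inspect output ties the earlier SSH activity together: the node publishes 22/tcp on 127.0.0.1:34240, which is exactly the port the disable logs' sshutil client connected to. The mapping can be pulled on its own with the same Go template the logs show minikube using, assuming the container is still up:

    # Extract the published host port for the node's SSH endpoint.
    docker inspect addons-377526 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'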
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-377526 -n addons-377526
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-377526 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-377526 logs -n 25: (1.392531444s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-739882                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-739882 │ jenkins │ v1.37.0 │ 09 Dec 25 04:17 UTC │ 09 Dec 25 04:17 UTC │
	│ start   │ --download-only -p binary-mirror-878510 --alsologtostderr --binary-mirror http://127.0.0.1:38315 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-878510   │ jenkins │ v1.37.0 │ 09 Dec 25 04:17 UTC │                     │
	│ delete  │ -p binary-mirror-878510                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-878510   │ jenkins │ v1.37.0 │ 09 Dec 25 04:17 UTC │ 09 Dec 25 04:17 UTC │
	│ addons  │ enable dashboard -p addons-377526                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-377526          │ jenkins │ v1.37.0 │ 09 Dec 25 04:17 UTC │                     │
	│ addons  │ disable dashboard -p addons-377526                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-377526          │ jenkins │ v1.37.0 │ 09 Dec 25 04:17 UTC │                     │
	│ start   │ -p addons-377526 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-377526          │ jenkins │ v1.37.0 │ 09 Dec 25 04:17 UTC │ 09 Dec 25 04:19 UTC │
	│ addons  │ addons-377526 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-377526          │ jenkins │ v1.37.0 │ 09 Dec 25 04:19 UTC │                     │
	│ addons  │ addons-377526 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-377526          │ jenkins │ v1.37.0 │ 09 Dec 25 04:19 UTC │                     │
	│ addons  │ enable headlamp -p addons-377526 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-377526          │ jenkins │ v1.37.0 │ 09 Dec 25 04:19 UTC │                     │
	│ addons  │ addons-377526 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-377526          │ jenkins │ v1.37.0 │ 09 Dec 25 04:19 UTC │                     │
	│ addons  │ addons-377526 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-377526          │ jenkins │ v1.37.0 │ 09 Dec 25 04:19 UTC │                     │
	│ ip      │ addons-377526 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-377526          │ jenkins │ v1.37.0 │ 09 Dec 25 04:19 UTC │ 09 Dec 25 04:19 UTC │
	│ addons  │ addons-377526 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-377526          │ jenkins │ v1.37.0 │ 09 Dec 25 04:19 UTC │                     │
	│ addons  │ addons-377526 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-377526          │ jenkins │ v1.37.0 │ 09 Dec 25 04:19 UTC │                     │
	│ addons  │ addons-377526 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-377526          │ jenkins │ v1.37.0 │ 09 Dec 25 04:20 UTC │                     │
	│ ssh     │ addons-377526 ssh cat /opt/local-path-provisioner/pvc-e33aa920-6724-4d1b-b3c6-a639b5fc9291_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-377526          │ jenkins │ v1.37.0 │ 09 Dec 25 04:20 UTC │ 09 Dec 25 04:20 UTC │
	│ addons  │ addons-377526 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-377526          │ jenkins │ v1.37.0 │ 09 Dec 25 04:20 UTC │                     │
	│ addons  │ addons-377526 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-377526          │ jenkins │ v1.37.0 │ 09 Dec 25 04:20 UTC │                     │
	│ ssh     │ addons-377526 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-377526          │ jenkins │ v1.37.0 │ 09 Dec 25 04:20 UTC │                     │
	│ addons  │ addons-377526 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-377526          │ jenkins │ v1.37.0 │ 09 Dec 25 04:20 UTC │                     │
	│ addons  │ addons-377526 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-377526          │ jenkins │ v1.37.0 │ 09 Dec 25 04:20 UTC │                     │
	│ addons  │ addons-377526 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-377526          │ jenkins │ v1.37.0 │ 09 Dec 25 04:20 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-377526                                                                                                                                                                                                                                                                                                                                                                                           │ addons-377526          │ jenkins │ v1.37.0 │ 09 Dec 25 04:20 UTC │ 09 Dec 25 04:20 UTC │
	│ addons  │ addons-377526 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-377526          │ jenkins │ v1.37.0 │ 09 Dec 25 04:20 UTC │                     │
	│ ip      │ addons-377526 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-377526          │ jenkins │ v1.37.0 │ 09 Dec 25 04:22 UTC │ 09 Dec 25 04:22 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 04:17:10
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
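	That prefix is the standard klog/glog header. A minimal Go sketch (illustrative only, not minikube code) for pulling the fields out of one of the lines below:

	package main

	import (
		"fmt"
		"regexp"
	)

	// klogLine matches the [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg format.
	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

	func main() {
		sample := "I1209 04:17:10.635592 1581510 out.go:360] Setting OutFile to fd 1 ..."
		if m := klogLine.FindStringSubmatch(sample); m != nil {
			fmt.Printf("severity=%s date=%s time=%s pid=%s file=%s:%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6], m[7])
		}
	}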
	I1209 04:17:10.635592 1581510 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:17:10.635777 1581510 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:17:10.635810 1581510 out.go:374] Setting ErrFile to fd 2...
	I1209 04:17:10.635830 1581510 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:17:10.636100 1581510 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 04:17:10.636564 1581510 out.go:368] Setting JSON to false
	I1209 04:17:10.637470 1581510 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":32371,"bootTime":1765221460,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1209 04:17:10.637561 1581510 start.go:143] virtualization:  
	I1209 04:17:10.640931 1581510 out.go:179] * [addons-377526] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 04:17:10.644866 1581510 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 04:17:10.645037 1581510 notify.go:221] Checking for updates...
	I1209 04:17:10.648524 1581510 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:17:10.651409 1581510 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:17:10.654319 1581510 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1577059/.minikube
	I1209 04:17:10.657260 1581510 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 04:17:10.660269 1581510 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 04:17:10.663576 1581510 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:17:10.687427 1581510 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:17:10.687569 1581510 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:17:10.749213 1581510 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-09 04:17:10.740144851 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:17:10.749323 1581510 docker.go:319] overlay module found
	I1209 04:17:10.752546 1581510 out.go:179] * Using the docker driver based on user configuration
	I1209 04:17:10.755462 1581510 start.go:309] selected driver: docker
	I1209 04:17:10.755486 1581510 start.go:927] validating driver "docker" against <nil>
	I1209 04:17:10.755500 1581510 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 04:17:10.756220 1581510 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:17:10.816998 1581510 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-09 04:17:10.802629837 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:17:10.817154 1581510 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1209 04:17:10.817379 1581510 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 04:17:10.820211 1581510 out.go:179] * Using Docker driver with root privileges
	I1209 04:17:10.823135 1581510 cni.go:84] Creating CNI manager for ""
	I1209 04:17:10.823208 1581510 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 04:17:10.823223 1581510 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 04:17:10.823304 1581510 start.go:353] cluster config:
	{Name:addons-377526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-377526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:17:10.828224 1581510 out.go:179] * Starting "addons-377526" primary control-plane node in "addons-377526" cluster
	I1209 04:17:10.831060 1581510 cache.go:134] Beginning downloading kic base image for docker with crio
	I1209 04:17:10.833871 1581510 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 04:17:10.836657 1581510 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 04:17:10.836707 1581510 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1209 04:17:10.836721 1581510 cache.go:65] Caching tarball of preloaded images
	I1209 04:17:10.836724 1581510 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 04:17:10.836809 1581510 preload.go:238] Found /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1209 04:17:10.836821 1581510 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1209 04:17:10.837194 1581510 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/config.json ...
	I1209 04:17:10.837224 1581510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/config.json: {Name:mk525736410a35602d90482be6cfa75a8128ee96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:17:10.856451 1581510 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 04:17:10.856475 1581510 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 04:17:10.856495 1581510 cache.go:243] Successfully downloaded all kic artifacts
	I1209 04:17:10.856526 1581510 start.go:360] acquireMachinesLock for addons-377526: {Name:mk7b7abdce6736faefe4780e4882eb58e1ac6bd6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 04:17:10.856654 1581510 start.go:364] duration metric: took 97.01µs to acquireMachinesLock for "addons-377526"
	I1209 04:17:10.856693 1581510 start.go:93] Provisioning new machine with config: &{Name:addons-377526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-377526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 04:17:10.856766 1581510 start.go:125] createHost starting for "" (driver="docker")
	I1209 04:17:10.861918 1581510 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1209 04:17:10.862162 1581510 start.go:159] libmachine.API.Create for "addons-377526" (driver="docker")
	I1209 04:17:10.862202 1581510 client.go:173] LocalClient.Create starting
	I1209 04:17:10.862329 1581510 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem
	I1209 04:17:10.952835 1581510 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem
	I1209 04:17:11.005752 1581510 cli_runner.go:164] Run: docker network inspect addons-377526 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1209 04:17:11.022092 1581510 cli_runner.go:211] docker network inspect addons-377526 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1209 04:17:11.022175 1581510 network_create.go:284] running [docker network inspect addons-377526] to gather additional debugging logs...
	I1209 04:17:11.022203 1581510 cli_runner.go:164] Run: docker network inspect addons-377526
	W1209 04:17:11.038306 1581510 cli_runner.go:211] docker network inspect addons-377526 returned with exit code 1
	I1209 04:17:11.038339 1581510 network_create.go:287] error running [docker network inspect addons-377526]: docker network inspect addons-377526: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-377526 not found
	I1209 04:17:11.038354 1581510 network_create.go:289] output of [docker network inspect addons-377526]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-377526 not found
	
	** /stderr **
	I1209 04:17:11.038461 1581510 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 04:17:11.055227 1581510 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019a8ee0}
	I1209 04:17:11.055274 1581510 network_create.go:124] attempt to create docker network addons-377526 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1209 04:17:11.055332 1581510 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-377526 addons-377526
	I1209 04:17:11.115693 1581510 network_create.go:108] docker network addons-377526 192.168.49.0/24 created
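	The subnet bookkeeping above (Gateway .1, ClientMin .2, ClientMax .254, Broadcast .255) is plain CIDR arithmetic. A stdlib-only Go sketch of the derivation (a hypothetical helper written for illustration, not minikube's own code):

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		_, ipnet, err := net.ParseCIDR("192.168.49.0/24")
		if err != nil {
			panic(err)
		}
		network := ipnet.IP.To4()
		mask := net.IP(ipnet.Mask).To4()

		broadcast := make(net.IP, 4)
		for i := range network {
			broadcast[i] = network[i] | ^mask[i] // set all host bits
		}
		gateway := make(net.IP, 4)
		copy(gateway, network)
		gateway[3]++ // first usable host goes to the bridge gateway

		clientMin := make(net.IP, 4)
		copy(clientMin, gateway)
		clientMin[3]++ // containers start one above the gateway

		clientMax := make(net.IP, 4)
		copy(clientMax, broadcast)
		clientMax[3]-- // last usable host sits below the broadcast address

		fmt.Println("gateway", gateway, "clients", clientMin, "-", clientMax, "broadcast", broadcast)
	}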
	I1209 04:17:11.115728 1581510 kic.go:121] calculated static IP "192.168.49.2" for the "addons-377526" container
	I1209 04:17:11.115823 1581510 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1209 04:17:11.132890 1581510 cli_runner.go:164] Run: docker volume create addons-377526 --label name.minikube.sigs.k8s.io=addons-377526 --label created_by.minikube.sigs.k8s.io=true
	I1209 04:17:11.151462 1581510 oci.go:103] Successfully created a docker volume addons-377526
	I1209 04:17:11.151560 1581510 cli_runner.go:164] Run: docker run --rm --name addons-377526-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-377526 --entrypoint /usr/bin/test -v addons-377526:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -d /var/lib
	I1209 04:17:12.264156 1581510 cli_runner.go:217] Completed: docker run --rm --name addons-377526-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-377526 --entrypoint /usr/bin/test -v addons-377526:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -d /var/lib: (1.112539146s)
	I1209 04:17:12.264190 1581510 oci.go:107] Successfully prepared a docker volume addons-377526
	I1209 04:17:12.264230 1581510 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 04:17:12.264248 1581510 kic.go:194] Starting extracting preloaded images to volume ...
	I1209 04:17:12.264312 1581510 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-377526:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir
	I1209 04:17:16.233694 1581510 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-377526:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir: (3.969342689s)
	I1209 04:17:16.233741 1581510 kic.go:203] duration metric: took 3.969487931s to extract preloaded images to volume ...
	W1209 04:17:16.233896 1581510 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1209 04:17:16.234033 1581510 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1209 04:17:16.287338 1581510 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-377526 --name addons-377526 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-377526 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-377526 --network addons-377526 --ip 192.168.49.2 --volume addons-377526:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c
	I1209 04:17:16.566171 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Running}}
	I1209 04:17:16.584420 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:16.610127 1581510 cli_runner.go:164] Run: docker exec addons-377526 stat /var/lib/dpkg/alternatives/iptables
	I1209 04:17:16.666937 1581510 oci.go:144] the created container "addons-377526" has a running status.
	I1209 04:17:16.666965 1581510 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa...
	I1209 04:17:17.604499 1581510 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1209 04:17:17.623897 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:17.640944 1581510 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1209 04:17:17.640969 1581510 kic_runner.go:114] Args: [docker exec --privileged addons-377526 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1209 04:17:17.678901 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:17.695788 1581510 machine.go:94] provisionDockerMachine start ...
	I1209 04:17:17.695894 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:17.712601 1581510 main.go:143] libmachine: Using SSH client type: native
	I1209 04:17:17.712954 1581510 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34240 <nil> <nil>}
	I1209 04:17:17.712971 1581510 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 04:17:17.713553 1581510 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50438->127.0.0.1:34240: read: connection reset by peer
	I1209 04:17:20.866085 1581510 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-377526
	
	I1209 04:17:20.866118 1581510 ubuntu.go:182] provisioning hostname "addons-377526"
	I1209 04:17:20.866182 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:20.884453 1581510 main.go:143] libmachine: Using SSH client type: native
	I1209 04:17:20.884776 1581510 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34240 <nil> <nil>}
	I1209 04:17:20.884787 1581510 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-377526 && echo "addons-377526" | sudo tee /etc/hostname
	I1209 04:17:21.044929 1581510 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-377526
	
	I1209 04:17:21.045028 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:21.063041 1581510 main.go:143] libmachine: Using SSH client type: native
	I1209 04:17:21.063367 1581510 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34240 <nil> <nil>}
	I1209 04:17:21.063391 1581510 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-377526' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-377526/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-377526' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 04:17:21.218967 1581510 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 04:17:21.219036 1581510 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1577059/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1577059/.minikube}
	I1209 04:17:21.219068 1581510 ubuntu.go:190] setting up certificates
	I1209 04:17:21.219086 1581510 provision.go:84] configureAuth start
	I1209 04:17:21.219158 1581510 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-377526
	I1209 04:17:21.236274 1581510 provision.go:143] copyHostCerts
	I1209 04:17:21.236367 1581510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem (1078 bytes)
	I1209 04:17:21.236488 1581510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem (1123 bytes)
	I1209 04:17:21.236559 1581510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem (1675 bytes)
	I1209 04:17:21.236619 1581510 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem org=jenkins.addons-377526 san=[127.0.0.1 192.168.49.2 addons-377526 localhost minikube]
	I1209 04:17:21.622818 1581510 provision.go:177] copyRemoteCerts
	I1209 04:17:21.622892 1581510 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 04:17:21.622935 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:21.639573 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:17:21.746296 1581510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 04:17:21.763251 1581510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1209 04:17:21.780433 1581510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 04:17:21.797477 1581510 provision.go:87] duration metric: took 578.368257ms to configureAuth
	I1209 04:17:21.797508 1581510 ubuntu.go:206] setting minikube options for container-runtime
	I1209 04:17:21.797698 1581510 config.go:182] Loaded profile config "addons-377526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:17:21.797812 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:21.814355 1581510 main.go:143] libmachine: Using SSH client type: native
	I1209 04:17:21.814705 1581510 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34240 <nil> <nil>}
	I1209 04:17:21.814728 1581510 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 04:17:22.434769 1581510 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 04:17:22.434795 1581510 machine.go:97] duration metric: took 4.73898637s to provisionDockerMachine
	I1209 04:17:22.434807 1581510 client.go:176] duration metric: took 11.572593166s to LocalClient.Create
	I1209 04:17:22.434832 1581510 start.go:167] duration metric: took 11.572661212s to libmachine.API.Create "addons-377526"
	I1209 04:17:22.434843 1581510 start.go:293] postStartSetup for "addons-377526" (driver="docker")
	I1209 04:17:22.434852 1581510 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 04:17:22.434944 1581510 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 04:17:22.435005 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:22.451622 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:17:22.558554 1581510 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 04:17:22.561787 1581510 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 04:17:22.561817 1581510 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 04:17:22.561833 1581510 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1577059/.minikube/addons for local assets ...
	I1209 04:17:22.561897 1581510 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1577059/.minikube/files for local assets ...
	I1209 04:17:22.561925 1581510 start.go:296] duration metric: took 127.076955ms for postStartSetup
	I1209 04:17:22.562230 1581510 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-377526
	I1209 04:17:22.578897 1581510 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/config.json ...
	I1209 04:17:22.579200 1581510 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 04:17:22.579259 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:22.595362 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:17:22.699580 1581510 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 04:17:22.704107 1581510 start.go:128] duration metric: took 11.847324258s to createHost
	I1209 04:17:22.704179 1581510 start.go:83] releasing machines lock for "addons-377526", held for 11.847510728s
	I1209 04:17:22.704279 1581510 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-377526
	I1209 04:17:22.721398 1581510 ssh_runner.go:195] Run: cat /version.json
	I1209 04:17:22.721448 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:22.721676 1581510 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 04:17:22.721743 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:22.739696 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:17:22.750624 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:17:22.932931 1581510 ssh_runner.go:195] Run: systemctl --version
	I1209 04:17:22.939424 1581510 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 04:17:22.980356 1581510 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 04:17:22.984855 1581510 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 04:17:22.984979 1581510 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 04:17:23.017345 1581510 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1209 04:17:23.017374 1581510 start.go:496] detecting cgroup driver to use...
	I1209 04:17:23.017409 1581510 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 04:17:23.017475 1581510 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 04:17:23.035963 1581510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 04:17:23.048921 1581510 docker.go:218] disabling cri-docker service (if available) ...
	I1209 04:17:23.048990 1581510 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 04:17:23.066802 1581510 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 04:17:23.085903 1581510 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 04:17:23.207123 1581510 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 04:17:23.325166 1581510 docker.go:234] disabling docker service ...
	I1209 04:17:23.325307 1581510 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 04:17:23.346866 1581510 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 04:17:23.360121 1581510 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 04:17:23.473271 1581510 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 04:17:23.581520 1581510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 04:17:23.594275 1581510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 04:17:23.607562 1581510 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 04:17:23.607657 1581510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:17:23.616587 1581510 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 04:17:23.616691 1581510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:17:23.626168 1581510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:17:23.634723 1581510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:17:23.643810 1581510 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 04:17:23.652128 1581510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:17:23.661212 1581510 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:17:23.674712 1581510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:17:23.683477 1581510 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 04:17:23.691323 1581510 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 04:17:23.698795 1581510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:17:23.804736 1581510 ssh_runner.go:195] Run: sudo systemctl restart crio
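	Pieced together from the sed edits above, the touched fragment of /etc/crio/crio.conf.d/02-crio.conf should end up roughly as follows (a reconstruction from the commands, not a capture of the actual file; the section headers are the usual cri-o ones and are assumed here):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]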
	I1209 04:17:23.973117 1581510 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 04:17:23.973253 1581510 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 04:17:23.977141 1581510 start.go:564] Will wait 60s for crictl version
	I1209 04:17:23.977235 1581510 ssh_runner.go:195] Run: which crictl
	I1209 04:17:23.980740 1581510 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 04:17:24.023318 1581510 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1209 04:17:24.023502 1581510 ssh_runner.go:195] Run: crio --version
	I1209 04:17:24.056606 1581510 ssh_runner.go:195] Run: crio --version
	I1209 04:17:24.090953 1581510 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1209 04:17:24.093932 1581510 cli_runner.go:164] Run: docker network inspect addons-377526 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 04:17:24.110133 1581510 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1209 04:17:24.114282 1581510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 04:17:24.124705 1581510 kubeadm.go:884] updating cluster {Name:addons-377526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-377526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 04:17:24.124837 1581510 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 04:17:24.124894 1581510 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:17:24.172493 1581510 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 04:17:24.172518 1581510 crio.go:433] Images already preloaded, skipping extraction
	I1209 04:17:24.172575 1581510 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:17:24.198094 1581510 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 04:17:24.198119 1581510 cache_images.go:86] Images are preloaded, skipping loading
	I1209 04:17:24.198128 1581510 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1209 04:17:24.198214 1581510 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-377526 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-377526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
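	The double ExecStart in the drop-in above is the standard systemd override idiom: the empty ExecStart= clears the command inherited from the packaged kubelet.service so that only the minikube-specific invocation runs; systemctl cat kubelet on the node would show the base unit followed by this 10-kubeadm.conf override.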
	I1209 04:17:24.198301 1581510 ssh_runner.go:195] Run: crio config
	I1209 04:17:24.262764 1581510 cni.go:84] Creating CNI manager for ""
	I1209 04:17:24.262793 1581510 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 04:17:24.262838 1581510 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 04:17:24.262868 1581510 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-377526 NodeName:addons-377526 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 04:17:24.263010 1581510 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-377526"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
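	One invariant a config like the one above has to satisfy is that the pod CIDR (podSubnet: 10.244.0.0/16) and the service CIDR (serviceSubnet: 10.96.0.0/12) do not overlap. A stdlib-only Go sketch of that check (for two CIDRs, overlap implies that one contains the other's base address):

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		_, pods, _ := net.ParseCIDR("10.244.0.0/16")
		_, svcs, _ := net.ParseCIDR("10.96.0.0/12")
		overlap := pods.Contains(svcs.IP) || svcs.Contains(pods.IP)
		fmt.Printf("podSubnet=%v serviceSubnet=%v overlap=%v\n", pods, svcs, overlap)
	}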
	
	I1209 04:17:24.263087 1581510 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1209 04:17:24.271000 1581510 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 04:17:24.271069 1581510 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 04:17:24.278844 1581510 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1209 04:17:24.292188 1581510 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 04:17:24.304754 1581510 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1209 04:17:24.317580 1581510 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1209 04:17:24.321227 1581510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 04:17:24.331065 1581510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:17:24.459901 1581510 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 04:17:24.478161 1581510 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526 for IP: 192.168.49.2
	I1209 04:17:24.478235 1581510 certs.go:195] generating shared ca certs ...
	I1209 04:17:24.478273 1581510 certs.go:227] acquiring lock for ca certs: {Name:mkbe8bce08db7aa945866791683d426e1b560718 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:17:24.478454 1581510 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key
	I1209 04:17:24.582108 1581510 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt ...
	I1209 04:17:24.582139 1581510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt: {Name:mk3a1918fa927ff9d32540da018f7eefbfc4b54b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:17:24.582340 1581510 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key ...
	I1209 04:17:24.582355 1581510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key: {Name:mk464efebeae6480718a4aefc3e662e3af96267f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:17:24.582468 1581510 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key
	I1209 04:17:24.732777 1581510 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.crt ...
	I1209 04:17:24.732808 1581510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.crt: {Name:mkf78f9fc0de3e89a151cf75e195ecd99b1990fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:17:24.732984 1581510 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key ...
	I1209 04:17:24.732999 1581510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key: {Name:mk0ad0fe979156209211c3c09aef76eb323713c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:17:24.733080 1581510 certs.go:257] generating profile certs ...
	I1209 04:17:24.733147 1581510 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.key
	I1209 04:17:24.733169 1581510 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt with IP's: []
	I1209 04:17:25.010284 1581510 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt ...
	I1209 04:17:25.010323 1581510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt: {Name:mkc865236fad47470fd38078b1a8f35f9a1112a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:17:25.010524 1581510 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.key ...
	I1209 04:17:25.010538 1581510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.key: {Name:mkca0decbe07d7184d011490eceb71932eccdd5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:17:25.010648 1581510 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/apiserver.key.bf3f738b
	I1209 04:17:25.010675 1581510 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/apiserver.crt.bf3f738b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1209 04:17:25.348310 1581510 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/apiserver.crt.bf3f738b ...
	I1209 04:17:25.348344 1581510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/apiserver.crt.bf3f738b: {Name:mk9672d7335ff226422c27cf73be60bb20f6b19e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:17:25.348523 1581510 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/apiserver.key.bf3f738b ...
	I1209 04:17:25.348541 1581510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/apiserver.key.bf3f738b: {Name:mkf8b61c27c6b290b98491b354fe8bf17c07e2e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:17:25.348617 1581510 certs.go:382] copying /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/apiserver.crt.bf3f738b -> /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/apiserver.crt
	I1209 04:17:25.348716 1581510 certs.go:386] copying /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/apiserver.key.bf3f738b -> /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/apiserver.key
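
The profile-cert steps above culminate in an apiserver serving certificate whose IP SANs cover the service VIP, loopback, and the node address (the [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2] list in the log). A self-contained Go sketch of a certificate with that SAN shape, built on crypto/x509; it self-signs for brevity, whereas the real cert is signed by the minikubeCA generated earlier:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The IP SANs from the log line above.
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"),
                net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"),
                net.ParseIP("192.168.49.2"),
            },
        }
        // Self-signed (template doubles as parent) to keep the sketch short.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
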
	I1209 04:17:25.348774 1581510 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/proxy-client.key
	I1209 04:17:25.348797 1581510 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/proxy-client.crt with IP's: []
	I1209 04:17:25.502270 1581510 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/proxy-client.crt ...
	I1209 04:17:25.502301 1581510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/proxy-client.crt: {Name:mk6a935916017e206f3bcc29fe39cbf396348f1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:17:25.502495 1581510 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/proxy-client.key ...
	I1209 04:17:25.502532 1581510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/proxy-client.key: {Name:mkf5ed0c32959705b0f222b7088fadea8a48b8e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:17:25.502745 1581510 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 04:17:25.502792 1581510 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem (1078 bytes)
	I1209 04:17:25.502824 1581510 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem (1123 bytes)
	I1209 04:17:25.502860 1581510 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem (1675 bytes)
	I1209 04:17:25.503436 1581510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 04:17:25.524980 1581510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 04:17:25.543194 1581510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 04:17:25.560875 1581510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 04:17:25.578702 1581510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1209 04:17:25.596972 1581510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 04:17:25.614666 1581510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 04:17:25.632620 1581510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 04:17:25.650560 1581510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 04:17:25.668671 1581510 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 04:17:25.681305 1581510 ssh_runner.go:195] Run: openssl version
	I1209 04:17:25.687727 1581510 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:17:25.695727 1581510 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 04:17:25.703405 1581510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:17:25.707256 1581510 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:17 /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:17:25.707329 1581510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:17:25.753764 1581510 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 04:17:25.761498 1581510 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1209 04:17:25.769034 1581510 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 04:17:25.772900 1581510 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 04:17:25.772953 1581510 kubeadm.go:401] StartCluster: {Name:addons-377526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-377526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:17:25.773031 1581510 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 04:17:25.773107 1581510 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:17:25.802926 1581510 cri.go:89] found id: ""
	I1209 04:17:25.803043 1581510 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 04:17:25.811233 1581510 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 04:17:25.819148 1581510 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 04:17:25.819214 1581510 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 04:17:25.826947 1581510 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 04:17:25.826969 1581510 kubeadm.go:158] found existing configuration files:
	
	I1209 04:17:25.827049 1581510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 04:17:25.835323 1581510 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 04:17:25.835419 1581510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 04:17:25.844218 1581510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 04:17:25.852387 1581510 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 04:17:25.852486 1581510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 04:17:25.860356 1581510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 04:17:25.868441 1581510 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 04:17:25.868540 1581510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 04:17:25.876243 1581510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 04:17:25.884510 1581510 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 04:17:25.884602 1581510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 04:17:25.892444 1581510 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
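
The Start line above wraps kubeadm in bash with PATH prefixed by /var/lib/minikube/binaries/v1.34.2, so the kubeadm that runs is the build matching the target Kubernetes version, and it passes a long --ignore-preflight-errors list because several host checks are meaningless inside a Docker container. A sketch of an equivalent invocation from Go, calling the pinned binary directly instead of rewriting PATH (the ignore list is abridged here, not the full logged value):

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.34.2/kubeadm", "init",
            "--config", "/var/tmp/minikube/kubeadm.yaml",
            // Abridged from the logged flag value.
            "--ignore-preflight-errors=Swap,NumCPU,Mem,SystemVerification")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        _ = cmd.Run()
    }
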
	I1209 04:17:25.930192 1581510 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1209 04:17:25.930501 1581510 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 04:17:25.955176 1581510 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 04:17:25.955252 1581510 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1209 04:17:25.955293 1581510 kubeadm.go:319] OS: Linux
	I1209 04:17:25.955345 1581510 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 04:17:25.955398 1581510 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1209 04:17:25.955449 1581510 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 04:17:25.955501 1581510 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 04:17:25.955552 1581510 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 04:17:25.955604 1581510 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 04:17:25.955655 1581510 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 04:17:25.955711 1581510 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 04:17:25.955761 1581510 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1209 04:17:26.036977 1581510 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 04:17:26.037100 1581510 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 04:17:26.037198 1581510 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 04:17:26.047029 1581510 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 04:17:26.050852 1581510 out.go:252]   - Generating certificates and keys ...
	I1209 04:17:26.050952 1581510 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 04:17:26.051025 1581510 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 04:17:26.320103 1581510 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 04:17:26.846351 1581510 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1209 04:17:27.470021 1581510 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1209 04:17:27.921121 1581510 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1209 04:17:28.611524 1581510 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1209 04:17:28.611672 1581510 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-377526 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1209 04:17:29.386222 1581510 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1209 04:17:29.386597 1581510 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-377526 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1209 04:17:29.851314 1581510 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 04:17:30.444407 1581510 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 04:17:30.699323 1581510 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1209 04:17:30.699608 1581510 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 04:17:31.020651 1581510 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 04:17:31.282136 1581510 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 04:17:31.538792 1581510 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 04:17:31.924035 1581510 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 04:17:32.066703 1581510 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 04:17:32.067461 1581510 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 04:17:32.070283 1581510 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 04:17:32.073773 1581510 out.go:252]   - Booting up control plane ...
	I1209 04:17:32.073894 1581510 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 04:17:32.083212 1581510 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 04:17:32.083294 1581510 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 04:17:32.101131 1581510 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 04:17:32.101256 1581510 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 04:17:32.109804 1581510 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 04:17:32.110195 1581510 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 04:17:32.110427 1581510 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 04:17:32.248104 1581510 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 04:17:32.248238 1581510 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 04:17:33.248504 1581510 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000859879s
	I1209 04:17:33.253293 1581510 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1209 04:17:33.253394 1581510 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1209 04:17:33.253483 1581510 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1209 04:17:33.253566 1581510 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1209 04:17:36.794701 1581510 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.541014639s
	I1209 04:17:38.484222 1581510 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.230843404s
	I1209 04:17:40.256090 1581510 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.002637701s
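
The three [control-plane-check] results above come from polling each component's health endpoint until it answers 200 or the 4m0s budget expires. A generic sketch of that loop; the URL matches the apiserver check in the log, while the probe interval and TLS handling are illustrative assumptions rather than minikube's exact code:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitHealthy(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // The endpoint's cert chains to the cluster CA, which this
            // sketch does not load, so verification is skipped for the probe.
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
        if err := waitHealthy("https://192.168.49.2:8443/livez", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
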
	I1209 04:17:40.290679 1581510 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 04:17:40.316217 1581510 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 04:17:40.357821 1581510 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 04:17:40.358297 1581510 kubeadm.go:319] [mark-control-plane] Marking the node addons-377526 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 04:17:40.371078 1581510 kubeadm.go:319] [bootstrap-token] Using token: lnu59a.k8lvbqwoiryzooup
	I1209 04:17:40.374022 1581510 out.go:252]   - Configuring RBAC rules ...
	I1209 04:17:40.374146 1581510 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 04:17:40.379060 1581510 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 04:17:40.389225 1581510 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 04:17:40.393812 1581510 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 04:17:40.398210 1581510 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 04:17:40.402763 1581510 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 04:17:40.664334 1581510 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 04:17:41.107274 1581510 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1209 04:17:41.663073 1581510 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1209 04:17:41.664072 1581510 kubeadm.go:319] 
	I1209 04:17:41.664144 1581510 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1209 04:17:41.664149 1581510 kubeadm.go:319] 
	I1209 04:17:41.664226 1581510 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1209 04:17:41.664231 1581510 kubeadm.go:319] 
	I1209 04:17:41.664261 1581510 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1209 04:17:41.664332 1581510 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 04:17:41.664383 1581510 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 04:17:41.664387 1581510 kubeadm.go:319] 
	I1209 04:17:41.664440 1581510 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1209 04:17:41.664444 1581510 kubeadm.go:319] 
	I1209 04:17:41.664491 1581510 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 04:17:41.664495 1581510 kubeadm.go:319] 
	I1209 04:17:41.664548 1581510 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1209 04:17:41.664623 1581510 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 04:17:41.664699 1581510 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 04:17:41.664704 1581510 kubeadm.go:319] 
	I1209 04:17:41.664788 1581510 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 04:17:41.664864 1581510 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1209 04:17:41.664868 1581510 kubeadm.go:319] 
	I1209 04:17:41.664954 1581510 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token lnu59a.k8lvbqwoiryzooup \
	I1209 04:17:41.665057 1581510 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7776204d6c5f563a8dabf61d61a81585bb99fbd1023d362d699de436ef3f27fb \
	I1209 04:17:41.665077 1581510 kubeadm.go:319] 	--control-plane 
	I1209 04:17:41.665081 1581510 kubeadm.go:319] 
	I1209 04:17:41.665166 1581510 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1209 04:17:41.665170 1581510 kubeadm.go:319] 
	I1209 04:17:41.665252 1581510 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token lnu59a.k8lvbqwoiryzooup \
	I1209 04:17:41.665354 1581510 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7776204d6c5f563a8dabf61d61a81585bb99fbd1023d362d699de436ef3f27fb 
	I1209 04:17:41.668049 1581510 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1209 04:17:41.668268 1581510 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 04:17:41.668371 1581510 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
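
The --discovery-token-ca-cert-hash in the join commands above is, under kubeadm's standard scheme, "sha256:" followed by the SHA-256 digest of the cluster CA certificate's Subject Public Key Info. It can be recomputed from the CA cert on the node (path taken from the scp lines earlier in this log):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm pins the SHA-256 of the CA's SubjectPublicKeyInfo.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }
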
	I1209 04:17:41.668392 1581510 cni.go:84] Creating CNI manager for ""
	I1209 04:17:41.668400 1581510 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 04:17:41.671506 1581510 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1209 04:17:41.674449 1581510 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1209 04:17:41.678300 1581510 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1209 04:17:41.678322 1581510 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1209 04:17:41.692999 1581510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1209 04:17:41.980288 1581510 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 04:17:41.980443 1581510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 04:17:41.980507 1581510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-377526 minikube.k8s.io/updated_at=2025_12_09T04_17_41_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d minikube.k8s.io/name=addons-377526 minikube.k8s.io/primary=true
	I1209 04:17:42.250623 1581510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 04:17:42.250757 1581510 ops.go:34] apiserver oom_adj: -16
	I1209 04:17:42.751378 1581510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 04:17:43.250725 1581510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 04:17:43.751149 1581510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 04:17:44.251583 1581510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 04:17:44.751376 1581510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 04:17:45.250827 1581510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 04:17:45.750890 1581510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 04:17:45.842157 1581510 kubeadm.go:1114] duration metric: took 3.86177558s to wait for elevateKubeSystemPrivileges
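
The burst of "kubectl get sa default" runs above, spaced roughly 500ms apart, is a wait loop: the default service account is created asynchronously after init, and the elevate step cannot finish until it exists. The shape of that loop as a sketch, with the binary path and flags taken from the log and the two-minute budget an assumption:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.34.2/kubectl"
        for start := time.Now(); time.Since(start) < 2*time.Minute; time.Sleep(500 * time.Millisecond) {
            cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig")
            // A zero exit status means the service account exists.
            if err := cmd.Run(); err == nil {
                fmt.Println("default service account ready after", time.Since(start))
                return
            }
        }
        fmt.Println("timed out waiting for default service account")
    }
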
	I1209 04:17:45.842205 1581510 kubeadm.go:403] duration metric: took 20.069257415s to StartCluster
	I1209 04:17:45.842226 1581510 settings.go:142] acquiring lock: {Name:mk2ff9b0d23dc8757d89015af482b8c477568e49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:17:45.842356 1581510 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:17:45.842782 1581510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/kubeconfig: {Name:mk56da51bd85daae017f7ca18ae73d8a385a4c6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:17:45.842989 1581510 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 04:17:45.843098 1581510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1209 04:17:45.843365 1581510 config.go:182] Loaded profile config "addons-377526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:17:45.843401 1581510 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1209 04:17:45.843480 1581510 addons.go:70] Setting yakd=true in profile "addons-377526"
	I1209 04:17:45.843502 1581510 addons.go:239] Setting addon yakd=true in "addons-377526"
	I1209 04:17:45.843524 1581510 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:17:45.844010 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:45.844260 1581510 addons.go:70] Setting inspektor-gadget=true in profile "addons-377526"
	I1209 04:17:45.844282 1581510 addons.go:239] Setting addon inspektor-gadget=true in "addons-377526"
	I1209 04:17:45.844304 1581510 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:17:45.844737 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:45.845113 1581510 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-377526"
	I1209 04:17:45.845137 1581510 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-377526"
	I1209 04:17:45.845160 1581510 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:17:45.845583 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:45.846522 1581510 addons.go:70] Setting metrics-server=true in profile "addons-377526"
	I1209 04:17:45.846551 1581510 addons.go:239] Setting addon metrics-server=true in "addons-377526"
	I1209 04:17:45.846594 1581510 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:17:45.847015 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:45.853508 1581510 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-377526"
	I1209 04:17:45.853544 1581510 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-377526"
	I1209 04:17:45.853583 1581510 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:17:45.854089 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:45.857511 1581510 addons.go:70] Setting cloud-spanner=true in profile "addons-377526"
	I1209 04:17:45.857583 1581510 addons.go:239] Setting addon cloud-spanner=true in "addons-377526"
	I1209 04:17:45.857641 1581510 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:17:45.860128 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:45.863781 1581510 addons.go:70] Setting registry=true in profile "addons-377526"
	I1209 04:17:45.863822 1581510 addons.go:239] Setting addon registry=true in "addons-377526"
	I1209 04:17:45.863874 1581510 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:17:45.864334 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:45.869965 1581510 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-377526"
	I1209 04:17:45.870036 1581510 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-377526"
	I1209 04:17:45.870067 1581510 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:17:45.870520 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:45.882350 1581510 addons.go:70] Setting registry-creds=true in profile "addons-377526"
	I1209 04:17:45.882404 1581510 addons.go:239] Setting addon registry-creds=true in "addons-377526"
	I1209 04:17:45.882448 1581510 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:17:45.882959 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:45.886707 1581510 addons.go:70] Setting default-storageclass=true in profile "addons-377526"
	I1209 04:17:45.886795 1581510 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-377526"
	I1209 04:17:45.887197 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:45.898032 1581510 addons.go:70] Setting gcp-auth=true in profile "addons-377526"
	I1209 04:17:45.898081 1581510 mustload.go:66] Loading cluster: addons-377526
	I1209 04:17:45.898287 1581510 config.go:182] Loaded profile config "addons-377526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:17:45.898621 1581510 addons.go:70] Setting storage-provisioner=true in profile "addons-377526"
	I1209 04:17:45.898642 1581510 addons.go:239] Setting addon storage-provisioner=true in "addons-377526"
	I1209 04:17:45.898669 1581510 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:17:45.899043 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:45.904285 1581510 addons.go:70] Setting ingress=true in profile "addons-377526"
	I1209 04:17:45.904375 1581510 addons.go:239] Setting addon ingress=true in "addons-377526"
	I1209 04:17:45.904448 1581510 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:17:45.904967 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:45.908213 1581510 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-377526"
	I1209 04:17:45.908253 1581510 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-377526"
	I1209 04:17:45.908584 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:45.918401 1581510 addons.go:70] Setting ingress-dns=true in profile "addons-377526"
	I1209 04:17:45.918436 1581510 addons.go:239] Setting addon ingress-dns=true in "addons-377526"
	I1209 04:17:45.918478 1581510 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:17:45.918978 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:45.920595 1581510 addons.go:70] Setting volcano=true in profile "addons-377526"
	I1209 04:17:45.920622 1581510 addons.go:239] Setting addon volcano=true in "addons-377526"
	I1209 04:17:45.920666 1581510 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:17:45.921120 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:45.933267 1581510 out.go:179] * Verifying Kubernetes components...
	I1209 04:17:45.937532 1581510 addons.go:70] Setting volumesnapshots=true in profile "addons-377526"
	I1209 04:17:45.937573 1581510 addons.go:239] Setting addon volumesnapshots=true in "addons-377526"
	I1209 04:17:45.937608 1581510 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:17:45.938082 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:45.973528 1581510 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1209 04:17:45.976489 1581510 out.go:179]   - Using image docker.io/registry:3.0.0
	I1209 04:17:45.979466 1581510 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1209 04:17:45.979496 1581510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1209 04:17:45.979565 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
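
The repeated docker container inspect -f calls in this phase all evaluate one Go template: index .NetworkSettings.Ports by "22/tcp", take element 0, and read its HostPort, which yields the host port Docker mapped to the container's SSH port (34240 in the ssh client lines below). The same expression can be evaluated standalone against a stand-in of the inspect data:

    package main

    import (
        "os"
        "text/template"
    )

    func main() {
        // Stand-in for the relevant slice of "docker container inspect" output.
        data := map[string]any{
            "NetworkSettings": map[string]any{
                "Ports": map[string][]map[string]string{
                    "22/tcp": {{"HostIp": "127.0.0.1", "HostPort": "34240"}},
                },
            },
        }
        tmpl := template.Must(template.New("port").Parse(
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
        if err := tmpl.Execute(os.Stdout, data); err != nil { // prints: 34240
            panic(err)
        }
    }
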
	I1209 04:17:45.991147 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:46.034012 1581510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:17:46.056611 1581510 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-377526"
	I1209 04:17:46.056723 1581510 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:17:46.057058 1581510 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1209 04:17:46.062190 1581510 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1209 04:17:46.062266 1581510 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1209 04:17:46.062406 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:46.085068 1581510 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1209 04:17:46.095049 1581510 addons.go:239] Setting addon default-storageclass=true in "addons-377526"
	I1209 04:17:46.095157 1581510 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 04:17:46.095169 1581510 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:17:46.095997 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:46.127766 1581510 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1209 04:17:46.132098 1581510 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1209 04:17:46.132122 1581510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1209 04:17:46.132189 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:46.132840 1581510 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1209 04:17:46.136233 1581510 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1209 04:17:46.142175 1581510 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1209 04:17:46.147186 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:46.147963 1581510 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1209 04:17:46.148255 1581510 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1209 04:17:46.148297 1581510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1209 04:17:46.148383 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:46.148672 1581510 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1209 04:17:46.148936 1581510 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1209 04:17:46.148948 1581510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1209 04:17:46.149001 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:46.175823 1581510 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1209 04:17:46.180777 1581510 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 04:17:46.180838 1581510 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 04:17:46.180921 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:46.194967 1581510 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1209 04:17:46.194987 1581510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1209 04:17:46.195052 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:46.202257 1581510 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1209 04:17:46.206706 1581510 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:17:46.206738 1581510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 04:17:46.206812 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:46.212661 1581510 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1209 04:17:46.212684 1581510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1209 04:17:46.212765 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:46.259327 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	W1209 04:17:46.260296 1581510 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1209 04:17:46.263621 1581510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1209 04:17:46.263792 1581510 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1209 04:17:46.263802 1581510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1209 04:17:46.263891 1581510 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1209 04:17:46.264429 1581510 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:17:46.266001 1581510 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1209 04:17:46.266293 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:46.266314 1581510 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1209 04:17:46.266544 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:17:46.292449 1581510 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1209 04:17:46.292506 1581510 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1209 04:17:46.292618 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:46.301358 1581510 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1209 04:17:46.301389 1581510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1209 04:17:46.301446 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:46.328918 1581510 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1209 04:17:46.332420 1581510 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1209 04:17:46.341380 1581510 out.go:179]   - Using image docker.io/busybox:stable
	I1209 04:17:46.375798 1581510 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1209 04:17:46.376206 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:17:46.377259 1581510 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1209 04:17:46.377278 1581510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1209 04:17:46.377362 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:46.383751 1581510 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1209 04:17:46.384321 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:17:46.392554 1581510 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1209 04:17:46.397668 1581510 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1209 04:17:46.402639 1581510 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1209 04:17:46.406516 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:17:46.412009 1581510 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1209 04:17:46.414800 1581510 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 04:17:46.414822 1581510 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 04:17:46.414900 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:46.417567 1581510 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1209 04:17:46.417597 1581510 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1209 04:17:46.417677 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:46.430912 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:17:46.434909 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:17:46.440552 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:17:46.455089 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:17:46.495356 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:17:46.518380 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:17:46.524472 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:17:46.530839 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:17:46.533805 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	W1209 04:17:46.536197 1581510 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1209 04:17:46.536237 1581510 retry.go:31] will retry after 286.932654ms: ssh: handshake failed: EOF
	I1209 04:17:46.577445 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:17:46.583554 1581510 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1209 04:17:46.828880 1581510 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1209 04:17:46.828909 1581510 retry.go:31] will retry after 224.110959ms: ssh: handshake failed: EOF
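
The handshake failed: EOF / will retry pairs above are the expected race while the container's sshd finishes starting; the dial is wrapped in a randomized-backoff retry. A generic sketch of that pattern (attempt count and jitter bounds here are illustrative, not the values minikube uses):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry runs do up to attempts times, sleeping a jittered
    // interval between failures, and returns the last error.
    func retry(attempts int, do func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = do(); err == nil {
                return nil
            }
            wait := time.Duration(100+rand.Intn(300)) * time.Millisecond
            fmt.Printf("will retry after %v: %v\n", wait, err)
            time.Sleep(wait)
        }
        return err
    }

    func main() {
        calls := 0
        err := retry(5, func() error {
            calls++
            if calls < 3 { // fail twice to exercise the backoff
                return errors.New("ssh: handshake failed: EOF")
            }
            return nil
        })
        fmt.Println("result:", err)
    }
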
	I1209 04:17:47.011009 1581510 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1209 04:17:47.011039 1581510 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1209 04:17:47.157344 1581510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1209 04:17:47.183740 1581510 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1209 04:17:47.183803 1581510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1209 04:17:47.224248 1581510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1209 04:17:47.233916 1581510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:17:47.316756 1581510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1209 04:17:47.320991 1581510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:17:47.367965 1581510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1209 04:17:47.369483 1581510 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1209 04:17:47.369520 1581510 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1209 04:17:47.372747 1581510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1209 04:17:47.378054 1581510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1209 04:17:47.400196 1581510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1209 04:17:47.537859 1581510 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1209 04:17:47.537930 1581510 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1209 04:17:47.609350 1581510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1209 04:17:47.616231 1581510 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 04:17:47.616314 1581510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1209 04:17:47.619193 1581510 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1209 04:17:47.619265 1581510 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1209 04:17:47.679327 1581510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1209 04:17:47.727897 1581510 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1209 04:17:47.727964 1581510 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1209 04:17:47.774140 1581510 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1209 04:17:47.774215 1581510 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1209 04:17:47.911542 1581510 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1209 04:17:47.911615 1581510 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1209 04:17:47.925806 1581510 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 04:17:47.925872 1581510 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 04:17:47.927211 1581510 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1209 04:17:47.927266 1581510 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1209 04:17:47.942304 1581510 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1209 04:17:47.942376 1581510 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1209 04:17:48.106131 1581510 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1209 04:17:48.106209 1581510 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1209 04:17:48.125039 1581510 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1209 04:17:48.125105 1581510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1209 04:17:48.288587 1581510 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 04:17:48.288668 1581510 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 04:17:48.338700 1581510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1209 04:17:48.372149 1581510 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1209 04:17:48.372230 1581510 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1209 04:17:48.384794 1581510 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1209 04:17:48.384858 1581510 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1209 04:17:48.450383 1581510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 04:17:48.603562 1581510 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 04:17:48.603629 1581510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1209 04:17:48.641377 1581510 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1209 04:17:48.641460 1581510 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1209 04:17:48.707085 1581510 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.443431045s)
	I1209 04:17:48.707173 1581510 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.123551536s)
	I1209 04:17:48.707976 1581510 node_ready.go:35] waiting up to 6m0s for node "addons-377526" to be "Ready" ...
	I1209 04:17:48.708204 1581510 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
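The completed command above is minikube's CoreDNS customization: it reads the coredns ConfigMap, splices a hosts block in front of the "forward . /etc/resolv.conf" directive with sed, and pushes the result back with kubectl replace. The injected Corefile fragment, reconstructed from the sed expression in that command, is:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }

In-cluster lookups of host.minikube.internal now resolve to the Docker network gateway (192.168.49.1), giving pods a stable name for the host machine; fallthrough hands every other name on to the next plugin in the chain.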
	I1209 04:17:48.854245 1581510 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1209 04:17:48.854307 1581510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1209 04:17:48.982680 1581510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 04:17:49.102498 1581510 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1209 04:17:49.102566 1581510 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1209 04:17:49.215865 1581510 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-377526" context rescaled to 1 replicas
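The rescale above pins the coredns deployment to a single replica, since a one-node cluster gains nothing from a second DNS pod. A sketch of that step against client-go's Scale subresource; this is illustrative, not the actual kapi.go implementation:

    package kapi

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // rescaleCoreDNS sets the coredns deployment in kube-system to one
    // replica via the Scale subresource.
    func rescaleCoreDNS(ctx context.Context, c kubernetes.Interface) error {
        scale, err := c.AppsV1().Deployments("kube-system").
            GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        scale.Spec.Replicas = 1
        _, err = c.AppsV1().Deployments("kube-system").
            UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
        return err
    }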
	I1209 04:17:49.253147 1581510 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1209 04:17:49.253168 1581510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1209 04:17:49.404067 1581510 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1209 04:17:49.404087 1581510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1209 04:17:49.570071 1581510 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1209 04:17:49.570139 1581510 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1209 04:17:49.781579 1581510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1209 04:17:50.722197 1581510 node_ready.go:57] node "addons-377526" has "Ready":"False" status (will retry)
	I1209 04:17:51.090793 1581510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (3.933371324s)
	I1209 04:17:52.401286 1581510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.176961955s)
	I1209 04:17:52.401361 1581510 addons.go:495] Verifying addon ingress=true in "addons-377526"
	I1209 04:17:52.401544 1581510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.167597423s)
	I1209 04:17:52.401838 1581510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.085047294s)
	I1209 04:17:52.401864 1581510 addons.go:495] Verifying addon registry=true in "addons-377526"
	I1209 04:17:52.401893 1581510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.080872992s)
	I1209 04:17:52.401947 1581510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.033957564s)
	I1209 04:17:52.401980 1581510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.029210399s)
	I1209 04:17:52.402021 1581510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.023936697s)
	I1209 04:17:52.402066 1581510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.001847247s)
	I1209 04:17:52.402257 1581510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.79283311s)
	I1209 04:17:52.402306 1581510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.722907706s)
	I1209 04:17:52.402370 1581510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.063602113s)
	I1209 04:17:52.402474 1581510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.952017777s)
	I1209 04:17:52.402489 1581510 addons.go:495] Verifying addon metrics-server=true in "addons-377526"
	I1209 04:17:52.402565 1581510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.419815962s)
	W1209 04:17:52.402605 1581510 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1209 04:17:52.402628 1581510 retry.go:31] will retry after 208.74496ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
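The failure above is an ordering hazard of batched applies: the VolumeSnapshotClass object ships in the same kubectl invocation as the CRDs that define its kind, and the server's discovery data has not caught up by the time the custom resource is mapped, hence "no matches for kind" and the hint to install CRDs first. minikube treats this as transient and retries after a short delay (the re-apply at 04:17:52.612044 below also adds --force). A sketch of that backoff loop, with hypothetical names; the actual schedule lives in retry.go:

    package addons

    import "time"

    // applyWithRetry re-runs an apply until it succeeds or attempts run out,
    // doubling the delay each round.
    func applyWithRetry(apply func() error, attempts int, delay time.Duration) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = apply(); err == nil {
                return nil
            }
            time.Sleep(delay)
            delay *= 2
        }
        return err
    }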
	I1209 04:17:52.404912 1581510 out.go:179] * Verifying registry addon...
	I1209 04:17:52.405015 1581510 out.go:179] * Verifying ingress addon...
	I1209 04:17:52.406956 1581510 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-377526 service yakd-dashboard -n yakd-dashboard
	
	I1209 04:17:52.409558 1581510 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1209 04:17:52.409558 1581510 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1209 04:17:52.420726 1581510 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1209 04:17:52.420751 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:17:52.423975 1581510 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1209 04:17:52.423999 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1209 04:17:52.436947 1581510 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
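The warning above is an optimistic-concurrency conflict: another writer updated the local-path StorageClass between this callback's read and its write, so the write carried a stale resourceVersion and the API server rejected it. The canonical client-go remedy is to re-read and re-apply the change under retry.RetryOnConflict; a sketch of what a conflict-safe version of the failing update could look like:

    package addons

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    // unsetDefaultClass clears the default-class annotation on a StorageClass,
    // re-fetching the object whenever the server reports a version conflict.
    func unsetDefaultClass(ctx context.Context, c kubernetes.Interface, name string) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            sc, err := c.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
            _, err = c.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
            return err
        })
    }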
	I1209 04:17:52.612044 1581510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 04:17:52.837647 1581510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.055975014s)
	I1209 04:17:52.837738 1581510 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-377526"
	I1209 04:17:52.842853 1581510 out.go:179] * Verifying csi-hostpath-driver addon...
	I1209 04:17:52.846656 1581510 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1209 04:17:52.853000 1581510 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1209 04:17:52.853027 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
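The kapi.go:96 lines that dominate the remainder of this log are one polling loop per addon: list the pods matching a label selector, log the current phase, and repeat until every pod is Running or the timeout expires. A minimal sketch of such a loop (assumed behavior; the real helper also renders per-pod state strings like the "Pending: [<nil>]" seen here):

    package kapi

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForLabel polls until all pods matching selector in ns are Running.
    func waitForLabel(ctx context.Context, c kubernetes.Interface, ns, selector string) error {
        return wait.PollUntilContextTimeout(ctx, time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil || len(pods.Items) == 0 {
                    return false, nil // transient: keep polling
                }
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        return false, nil
                    }
                }
                return true, nil
            })
    }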
	I1209 04:17:52.914829 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:17:52.915216 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1209 04:17:53.211662 1581510 node_ready.go:57] node "addons-377526" has "Ready":"False" status (will retry)
	I1209 04:17:53.351223 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:17:53.414969 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:17:53.415352 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:17:53.850470 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:17:53.912777 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:17:53.913232 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:17:54.037411 1581510 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1209 04:17:54.037502 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:54.057952 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
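The cli_runner/sshutil pair above is how minikube reaches a Docker-driver node: the container only exposes SSH through a randomly assigned host port, so a Go template pulls the binding for 22/tcp out of docker inspect, and the SSH client then dials 127.0.0.1 on that port (34240 here). The same lookup, reduced to a sketch:

    package sshutil

    import (
        "os/exec"
        "strings"
    )

    // sshPort returns the host port Docker mapped to the container's 22/tcp,
    // using the same inspect template as the log line above.
    func sshPort(container string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
            container).Output()
        return strings.TrimSpace(string(out)), err
    }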
	I1209 04:17:54.187237 1581510 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1209 04:17:54.199896 1581510 addons.go:239] Setting addon gcp-auth=true in "addons-377526"
	I1209 04:17:54.199948 1581510 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:17:54.200404 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:54.219781 1581510 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1209 04:17:54.219842 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:54.236843 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:17:54.341383 1581510 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1209 04:17:54.344176 1581510 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1209 04:17:54.347015 1581510 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1209 04:17:54.347036 1581510 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1209 04:17:54.351013 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:17:54.363041 1581510 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1209 04:17:54.363065 1581510 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1209 04:17:54.376513 1581510 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1209 04:17:54.376535 1581510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1209 04:17:54.389338 1581510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1209 04:17:54.415421 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:17:54.415906 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:17:54.857959 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:17:54.881296 1581510 addons.go:495] Verifying addon gcp-auth=true in "addons-377526"
	I1209 04:17:54.883527 1581510 out.go:179] * Verifying gcp-auth addon...
	I1209 04:17:54.887487 1581510 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1209 04:17:54.953948 1581510 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1209 04:17:54.953976 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:17:54.954096 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:17:54.954319 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:17:55.350710 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:17:55.390563 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:17:55.413063 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:17:55.413310 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1209 04:17:55.711463 1581510 node_ready.go:57] node "addons-377526" has "Ready":"False" status (will retry)
	I1209 04:17:55.850556 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:17:55.890618 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:17:55.913593 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:17:55.913743 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:17:56.350317 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:17:56.391204 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:17:56.413520 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:17:56.414005 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:17:56.849842 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:17:56.891019 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:17:56.913491 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:17:56.913571 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:17:57.350141 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:17:57.391211 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:17:57.413255 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:17:57.413499 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:17:57.851021 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:17:57.890911 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:17:57.913335 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:17:57.914356 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1209 04:17:58.211554 1581510 node_ready.go:57] node "addons-377526" has "Ready":"False" status (will retry)
	I1209 04:17:58.349623 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:17:58.390299 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:17:58.413678 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:17:58.413971 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:17:58.850847 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:17:58.891082 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:17:58.913349 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:17:58.914017 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:17:59.350736 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:17:59.390713 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:17:59.412583 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:17:59.413242 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:17:59.850529 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:17:59.890611 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:17:59.913755 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:17:59.914326 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1209 04:18:00.221705 1581510 node_ready.go:57] node "addons-377526" has "Ready":"False" status (will retry)
	I1209 04:18:00.351772 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:00.391355 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:00.413973 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:00.414093 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:00.851255 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:00.891047 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:00.913068 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:00.913295 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:01.350378 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:01.400078 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:01.420708 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:01.421398 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:01.849909 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:01.890877 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:01.913245 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:01.913792 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:02.350617 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:02.390826 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:02.412771 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:02.413098 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1209 04:18:02.711015 1581510 node_ready.go:57] node "addons-377526" has "Ready":"False" status (will retry)
	I1209 04:18:02.850549 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:02.891187 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:02.913316 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:02.914126 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:03.350360 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:03.391235 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:03.413096 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:03.413667 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:03.849594 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:03.891414 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:03.913524 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:03.913726 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:04.350019 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:04.390870 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:04.413083 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:04.413361 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1209 04:18:04.711530 1581510 node_ready.go:57] node "addons-377526" has "Ready":"False" status (will retry)
	I1209 04:18:04.850683 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:04.890403 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:04.913821 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:04.913962 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:05.350727 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:05.390408 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:05.413575 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:05.414061 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:05.851464 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:05.891350 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:05.914954 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:05.915514 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:06.350307 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:06.391652 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:06.413540 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:06.413958 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:06.850218 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:06.891619 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:06.913395 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:06.913761 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1209 04:18:07.210858 1581510 node_ready.go:57] node "addons-377526" has "Ready":"False" status (will retry)
	I1209 04:18:07.350121 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:07.390930 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:07.412884 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:07.413860 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:07.849997 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:07.890534 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:07.912606 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:07.912748 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:08.350017 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:08.390761 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:08.412741 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:08.413106 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:08.849895 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:08.890909 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:08.913661 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:08.913930 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1209 04:18:09.211158 1581510 node_ready.go:57] node "addons-377526" has "Ready":"False" status (will retry)
	I1209 04:18:09.350230 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:09.391238 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:09.413196 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:09.413356 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:09.850984 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:09.890685 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:09.913295 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:09.913450 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:10.350664 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:10.390598 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:10.413695 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:10.414053 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:10.850490 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:10.890479 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:10.913582 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:10.913764 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:11.350691 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:11.390621 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:11.413854 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:11.414091 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1209 04:18:11.711260 1581510 node_ready.go:57] node "addons-377526" has "Ready":"False" status (will retry)
	I1209 04:18:11.852130 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:11.891217 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:11.913073 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:11.913228 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:12.350324 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:12.391131 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:12.413197 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:12.413341 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:12.850256 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:12.891140 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:12.913556 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:12.913834 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:13.350233 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:13.391232 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:13.413644 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:13.413906 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:13.850875 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:13.890430 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:13.913885 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:13.913958 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1209 04:18:14.211095 1581510 node_ready.go:57] node "addons-377526" has "Ready":"False" status (will retry)
	I1209 04:18:14.350034 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:14.391055 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:14.413235 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:14.413348 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:14.849979 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:14.890805 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:14.912822 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:14.912958 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:15.350112 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:15.390917 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:15.413104 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:15.413503 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:15.851707 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:15.890863 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:15.913465 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:15.913582 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:16.350137 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:16.391007 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:16.413225 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:16.413330 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1209 04:18:16.711236 1581510 node_ready.go:57] node "addons-377526" has "Ready":"False" status (will retry)
	I1209 04:18:16.851058 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:16.890707 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:16.912661 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:16.913021 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:17.349906 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:17.390680 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:17.412773 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:17.413061 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:17.851066 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:17.890788 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:17.912820 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:17.913071 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:18.349752 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:18.390611 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:18.413689 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:18.413895 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:18.850844 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:18.891042 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:18.913081 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:18.913512 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1209 04:18:19.212157 1581510 node_ready.go:57] node "addons-377526" has "Ready":"False" status (will retry)
	I1209 04:18:19.350590 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:19.391046 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:19.414766 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:19.414996 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:19.849911 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:19.891342 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:19.914454 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:19.914986 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:20.349650 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:20.390693 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:20.412521 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:20.413021 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:20.851389 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:20.891342 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:20.913530 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:20.913723 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:21.350285 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:21.391203 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:21.413567 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:21.413946 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1209 04:18:21.710776 1581510 node_ready.go:57] node "addons-377526" has "Ready":"False" status (will retry)
	I1209 04:18:21.850393 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:21.891246 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:21.913128 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:21.913321 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:22.349798 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:22.390827 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:22.413271 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:22.413643 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:22.849642 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:22.890816 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:22.913177 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:22.913254 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:23.350185 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:23.390893 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:23.413013 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:23.413190 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1209 04:18:23.711175 1581510 node_ready.go:57] node "addons-377526" has "Ready":"False" status (will retry)
	I1209 04:18:23.850795 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:23.890684 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:23.912662 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:23.912880 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:24.350375 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:24.391490 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:24.413382 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:24.413733 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:24.849801 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:24.890413 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:24.914022 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:24.914299 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:25.349965 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:25.390781 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:25.412960 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:25.413283 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1209 04:18:25.711656 1581510 node_ready.go:57] node "addons-377526" has "Ready":"False" status (will retry)
	I1209 04:18:25.849982 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:25.890949 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:25.913336 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:25.913392 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:26.350050 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:26.391093 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:26.413283 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:26.413739 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:26.849713 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:26.893549 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:26.912582 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:26.913000 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:27.349756 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:27.390759 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:27.412562 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:27.412819 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:27.849508 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:27.890340 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:27.913690 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:27.913755 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:28.228189 1581510 node_ready.go:49] node "addons-377526" is "Ready"
	I1209 04:18:28.228240 1581510 node_ready.go:38] duration metric: took 39.520237725s for node "addons-377526" to be "Ready" ...
	I1209 04:18:28.228269 1581510 api_server.go:52] waiting for apiserver process to appear ...
	I1209 04:18:28.228335 1581510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:18:28.244351 1581510 api_server.go:72] duration metric: took 42.401322922s to wait for apiserver process to appear ...
	I1209 04:18:28.244378 1581510 api_server.go:88] waiting for apiserver healthz status ...
	I1209 04:18:28.244398 1581510 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1209 04:18:28.259307 1581510 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1209 04:18:28.260868 1581510 api_server.go:141] control plane version: v1.34.2
	I1209 04:18:28.260899 1581510 api_server.go:131] duration metric: took 16.514257ms to wait for apiserver health ...
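The lines above show the apiserver readiness gate: once the kube-apiserver process appears (via pgrep), minikube polls https://192.168.49.2:8443/healthz until it returns 200 with body `ok`, then reads the control plane version. A minimal Go sketch of that style of poll, assuming a self-signed apiserver cert (hence InsecureSkipVerify) and a ~500ms interval; this is an illustration only, not minikube's actual api_server.go:

```go
// healthz_poll.go - illustrative sketch of polling an apiserver /healthz
// endpoint until it reports 200 OK or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver cert is self-signed in a minikube cluster, so this
		// sketch skips verification (an assumption, not minikube's code).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: apiserver is healthy
			}
		}
		time.Sleep(500 * time.Millisecond) // poll interval (assumed)
	}
	return fmt.Errorf("healthz at %s did not return 200 within %v", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```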
	I1209 04:18:28.260921 1581510 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 04:18:28.272678 1581510 system_pods.go:59] 19 kube-system pods found
	I1209 04:18:28.272766 1581510 system_pods.go:61] "coredns-66bc5c9577-rvbf9" [35948c37-785f-4aa1-9a5b-943c895a4a5c] Pending
	I1209 04:18:28.272787 1581510 system_pods.go:61] "csi-hostpath-attacher-0" [e4b01171-b9b6-4022-9f41-57183f5d762b] Pending
	I1209 04:18:28.272808 1581510 system_pods.go:61] "csi-hostpath-resizer-0" [a0052ab4-7687-4065-80b6-f41145b70608] Pending
	I1209 04:18:28.272840 1581510 system_pods.go:61] "csi-hostpathplugin-865n6" [ccf13813-b372-48b8-b02b-3ba9cffd5291] Pending
	I1209 04:18:28.272865 1581510 system_pods.go:61] "etcd-addons-377526" [d54c5e9a-cbe9-487f-b06a-c8626b1d468b] Running
	I1209 04:18:28.272885 1581510 system_pods.go:61] "kindnet-whbx4" [314b6981-5dab-4d60-ad7b-4ee5fafa37fe] Running
	I1209 04:18:28.272906 1581510 system_pods.go:61] "kube-apiserver-addons-377526" [83c59d5f-df9b-4e36-a82e-53243caa5583] Running
	I1209 04:18:28.272926 1581510 system_pods.go:61] "kube-controller-manager-addons-377526" [34b73da0-2ba8-42fa-b9e5-6f64f0a71841] Running
	I1209 04:18:28.272954 1581510 system_pods.go:61] "kube-ingress-dns-minikube" [836995ce-e2ce-4f7c-bd3f-ecae6ed195a6] Pending
	I1209 04:18:28.272978 1581510 system_pods.go:61] "kube-proxy-vrrb5" [c66f00d5-5347-4e19-8806-1d20162ad7ba] Running
	I1209 04:18:28.272998 1581510 system_pods.go:61] "kube-scheduler-addons-377526" [645ccb07-6b76-4a43-a7e2-94b2ce1d9004] Running
	I1209 04:18:28.273022 1581510 system_pods.go:61] "metrics-server-85b7d694d7-pckkq" [5cf1cd5f-cc2e-4169-947b-41b6e4c45a46] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 04:18:28.273041 1581510 system_pods.go:61] "nvidia-device-plugin-daemonset-qpgbq" [bbfd593a-3793-4122-af52-8a0e32e51d36] Pending
	I1209 04:18:28.273077 1581510 system_pods.go:61] "registry-6b586f9694-pd2mr" [53908d34-a310-48b3-ae54-ebda566b420b] Pending
	I1209 04:18:28.273096 1581510 system_pods.go:61] "registry-creds-764b6fb674-hdrg9" [6de1311b-03a7-4949-9055-39d7b8dbf7fe] Pending
	I1209 04:18:28.273115 1581510 system_pods.go:61] "registry-proxy-nlsrb" [2b82ec5a-1e98-4bd4-b422-b6fb23cad87c] Pending
	I1209 04:18:28.273135 1581510 system_pods.go:61] "snapshot-controller-7d9fbc56b8-tx8sz" [b65548e1-7050-4896-b543-115dbcf7a7ba] Pending
	I1209 04:18:28.273166 1581510 system_pods.go:61] "snapshot-controller-7d9fbc56b8-zphwq" [d3462a77-4488-4807-8fb0-61c704574e50] Pending
	I1209 04:18:28.273191 1581510 system_pods.go:61] "storage-provisioner" [e235f449-4cd1-4e82-b1b2-9ba59f81b5b0] Pending
	I1209 04:18:28.273212 1581510 system_pods.go:74] duration metric: took 12.284274ms to wait for pod list to return data ...
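Both the system_pods.go listing above and the recurring kapi.go waits come down to the same client-go pattern: list pods (optionally by label selector) and inspect their phase. A minimal sketch under stated assumptions (the kubeconfig path, the kube-system namespace, and one of the addon label selectors from the log are placeholders; this is not minikube's own code):

```go
// pod_wait.go - illustrative client-go sketch: list pods matching a label
// selector and loop until none of them is still Pending.
package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption for the sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	selector := "kubernetes.io/minikube-addons=registry" // one selector from the log
	for {
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			panic(err)
		}
		pending := 0
		for _, p := range pods.Items {
			if p.Status.Phase == v1.PodPending {
				pending++
			}
		}
		if len(pods.Items) > 0 && pending == 0 {
			fmt.Printf("all %d pods for %q are past Pending\n", len(pods.Items), selector)
			return
		}
		fmt.Printf("waiting for pod %q: %d/%d still Pending\n", selector, pending, len(pods.Items))
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
}
```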
	I1209 04:18:28.273234 1581510 default_sa.go:34] waiting for default service account to be created ...
	I1209 04:18:28.314073 1581510 default_sa.go:45] found service account: "default"
	I1209 04:18:28.314111 1581510 default_sa.go:55] duration metric: took 40.856873ms for default service account to be created ...
	I1209 04:18:28.314123 1581510 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 04:18:28.452461 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:28.452717 1581510 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1209 04:18:28.452796 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:28.453815 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:28.454847 1581510 system_pods.go:86] 19 kube-system pods found
	I1209 04:18:28.454911 1581510 system_pods.go:89] "coredns-66bc5c9577-rvbf9" [35948c37-785f-4aa1-9a5b-943c895a4a5c] Pending
	I1209 04:18:28.454932 1581510 system_pods.go:89] "csi-hostpath-attacher-0" [e4b01171-b9b6-4022-9f41-57183f5d762b] Pending
	I1209 04:18:28.454953 1581510 system_pods.go:89] "csi-hostpath-resizer-0" [a0052ab4-7687-4065-80b6-f41145b70608] Pending
	I1209 04:18:28.454993 1581510 system_pods.go:89] "csi-hostpathplugin-865n6" [ccf13813-b372-48b8-b02b-3ba9cffd5291] Pending
	I1209 04:18:28.455018 1581510 system_pods.go:89] "etcd-addons-377526" [d54c5e9a-cbe9-487f-b06a-c8626b1d468b] Running
	I1209 04:18:28.455041 1581510 system_pods.go:89] "kindnet-whbx4" [314b6981-5dab-4d60-ad7b-4ee5fafa37fe] Running
	I1209 04:18:28.455080 1581510 system_pods.go:89] "kube-apiserver-addons-377526" [83c59d5f-df9b-4e36-a82e-53243caa5583] Running
	I1209 04:18:28.455104 1581510 system_pods.go:89] "kube-controller-manager-addons-377526" [34b73da0-2ba8-42fa-b9e5-6f64f0a71841] Running
	I1209 04:18:28.455127 1581510 system_pods.go:89] "kube-ingress-dns-minikube" [836995ce-e2ce-4f7c-bd3f-ecae6ed195a6] Pending
	I1209 04:18:28.455167 1581510 system_pods.go:89] "kube-proxy-vrrb5" [c66f00d5-5347-4e19-8806-1d20162ad7ba] Running
	I1209 04:18:28.455194 1581510 system_pods.go:89] "kube-scheduler-addons-377526" [645ccb07-6b76-4a43-a7e2-94b2ce1d9004] Running
	I1209 04:18:28.455219 1581510 system_pods.go:89] "metrics-server-85b7d694d7-pckkq" [5cf1cd5f-cc2e-4169-947b-41b6e4c45a46] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 04:18:28.455256 1581510 system_pods.go:89] "nvidia-device-plugin-daemonset-qpgbq" [bbfd593a-3793-4122-af52-8a0e32e51d36] Pending
	I1209 04:18:28.455286 1581510 system_pods.go:89] "registry-6b586f9694-pd2mr" [53908d34-a310-48b3-ae54-ebda566b420b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1209 04:18:28.455307 1581510 system_pods.go:89] "registry-creds-764b6fb674-hdrg9" [6de1311b-03a7-4949-9055-39d7b8dbf7fe] Pending
	I1209 04:18:28.455349 1581510 system_pods.go:89] "registry-proxy-nlsrb" [2b82ec5a-1e98-4bd4-b422-b6fb23cad87c] Pending
	I1209 04:18:28.455371 1581510 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tx8sz" [b65548e1-7050-4896-b543-115dbcf7a7ba] Pending
	I1209 04:18:28.455395 1581510 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zphwq" [d3462a77-4488-4807-8fb0-61c704574e50] Pending
	I1209 04:18:28.455446 1581510 system_pods.go:89] "storage-provisioner" [e235f449-4cd1-4e82-b1b2-9ba59f81b5b0] Pending
	I1209 04:18:28.455490 1581510 retry.go:31] will retry after 227.613254ms: missing components: kube-dns
	I1209 04:18:28.455951 1581510 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1209 04:18:28.456001 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:28.688540 1581510 system_pods.go:86] 19 kube-system pods found
	I1209 04:18:28.688640 1581510 system_pods.go:89] "coredns-66bc5c9577-rvbf9" [35948c37-785f-4aa1-9a5b-943c895a4a5c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 04:18:28.688676 1581510 system_pods.go:89] "csi-hostpath-attacher-0" [e4b01171-b9b6-4022-9f41-57183f5d762b] Pending
	I1209 04:18:28.688724 1581510 system_pods.go:89] "csi-hostpath-resizer-0" [a0052ab4-7687-4065-80b6-f41145b70608] Pending
	I1209 04:18:28.688743 1581510 system_pods.go:89] "csi-hostpathplugin-865n6" [ccf13813-b372-48b8-b02b-3ba9cffd5291] Pending
	I1209 04:18:28.688779 1581510 system_pods.go:89] "etcd-addons-377526" [d54c5e9a-cbe9-487f-b06a-c8626b1d468b] Running
	I1209 04:18:28.688802 1581510 system_pods.go:89] "kindnet-whbx4" [314b6981-5dab-4d60-ad7b-4ee5fafa37fe] Running
	I1209 04:18:28.688822 1581510 system_pods.go:89] "kube-apiserver-addons-377526" [83c59d5f-df9b-4e36-a82e-53243caa5583] Running
	I1209 04:18:28.688858 1581510 system_pods.go:89] "kube-controller-manager-addons-377526" [34b73da0-2ba8-42fa-b9e5-6f64f0a71841] Running
	I1209 04:18:28.688883 1581510 system_pods.go:89] "kube-ingress-dns-minikube" [836995ce-e2ce-4f7c-bd3f-ecae6ed195a6] Pending
	I1209 04:18:28.688912 1581510 system_pods.go:89] "kube-proxy-vrrb5" [c66f00d5-5347-4e19-8806-1d20162ad7ba] Running
	I1209 04:18:28.688964 1581510 system_pods.go:89] "kube-scheduler-addons-377526" [645ccb07-6b76-4a43-a7e2-94b2ce1d9004] Running
	I1209 04:18:28.688997 1581510 system_pods.go:89] "metrics-server-85b7d694d7-pckkq" [5cf1cd5f-cc2e-4169-947b-41b6e4c45a46] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 04:18:28.689045 1581510 system_pods.go:89] "nvidia-device-plugin-daemonset-qpgbq" [bbfd593a-3793-4122-af52-8a0e32e51d36] Pending
	I1209 04:18:28.689069 1581510 system_pods.go:89] "registry-6b586f9694-pd2mr" [53908d34-a310-48b3-ae54-ebda566b420b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1209 04:18:28.689090 1581510 system_pods.go:89] "registry-creds-764b6fb674-hdrg9" [6de1311b-03a7-4949-9055-39d7b8dbf7fe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1209 04:18:28.689125 1581510 system_pods.go:89] "registry-proxy-nlsrb" [2b82ec5a-1e98-4bd4-b422-b6fb23cad87c] Pending
	I1209 04:18:28.689151 1581510 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tx8sz" [b65548e1-7050-4896-b543-115dbcf7a7ba] Pending
	I1209 04:18:28.689171 1581510 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zphwq" [d3462a77-4488-4807-8fb0-61c704574e50] Pending
	I1209 04:18:28.689213 1581510 system_pods.go:89] "storage-provisioner" [e235f449-4cd1-4e82-b1b2-9ba59f81b5b0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 04:18:28.689252 1581510 retry.go:31] will retry after 309.414535ms: missing components: kube-dns
	I1209 04:18:28.864133 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:28.938349 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:29.020637 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:29.024271 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:29.107928 1581510 system_pods.go:86] 19 kube-system pods found
	I1209 04:18:29.108015 1581510 system_pods.go:89] "coredns-66bc5c9577-rvbf9" [35948c37-785f-4aa1-9a5b-943c895a4a5c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 04:18:29.108038 1581510 system_pods.go:89] "csi-hostpath-attacher-0" [e4b01171-b9b6-4022-9f41-57183f5d762b] Pending
	I1209 04:18:29.108079 1581510 system_pods.go:89] "csi-hostpath-resizer-0" [a0052ab4-7687-4065-80b6-f41145b70608] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1209 04:18:29.108104 1581510 system_pods.go:89] "csi-hostpathplugin-865n6" [ccf13813-b372-48b8-b02b-3ba9cffd5291] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1209 04:18:29.108123 1581510 system_pods.go:89] "etcd-addons-377526" [d54c5e9a-cbe9-487f-b06a-c8626b1d468b] Running
	I1209 04:18:29.108159 1581510 system_pods.go:89] "kindnet-whbx4" [314b6981-5dab-4d60-ad7b-4ee5fafa37fe] Running
	I1209 04:18:29.108184 1581510 system_pods.go:89] "kube-apiserver-addons-377526" [83c59d5f-df9b-4e36-a82e-53243caa5583] Running
	I1209 04:18:29.108205 1581510 system_pods.go:89] "kube-controller-manager-addons-377526" [34b73da0-2ba8-42fa-b9e5-6f64f0a71841] Running
	I1209 04:18:29.108243 1581510 system_pods.go:89] "kube-ingress-dns-minikube" [836995ce-e2ce-4f7c-bd3f-ecae6ed195a6] Pending
	I1209 04:18:29.108267 1581510 system_pods.go:89] "kube-proxy-vrrb5" [c66f00d5-5347-4e19-8806-1d20162ad7ba] Running
	I1209 04:18:29.108288 1581510 system_pods.go:89] "kube-scheduler-addons-377526" [645ccb07-6b76-4a43-a7e2-94b2ce1d9004] Running
	I1209 04:18:29.108329 1581510 system_pods.go:89] "metrics-server-85b7d694d7-pckkq" [5cf1cd5f-cc2e-4169-947b-41b6e4c45a46] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 04:18:29.108356 1581510 system_pods.go:89] "nvidia-device-plugin-daemonset-qpgbq" [bbfd593a-3793-4122-af52-8a0e32e51d36] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1209 04:18:29.108380 1581510 system_pods.go:89] "registry-6b586f9694-pd2mr" [53908d34-a310-48b3-ae54-ebda566b420b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1209 04:18:29.108418 1581510 system_pods.go:89] "registry-creds-764b6fb674-hdrg9" [6de1311b-03a7-4949-9055-39d7b8dbf7fe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1209 04:18:29.108449 1581510 system_pods.go:89] "registry-proxy-nlsrb" [2b82ec5a-1e98-4bd4-b422-b6fb23cad87c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1209 04:18:29.108471 1581510 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tx8sz" [b65548e1-7050-4896-b543-115dbcf7a7ba] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 04:18:29.108507 1581510 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zphwq" [d3462a77-4488-4807-8fb0-61c704574e50] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 04:18:29.108535 1581510 system_pods.go:89] "storage-provisioner" [e235f449-4cd1-4e82-b1b2-9ba59f81b5b0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 04:18:29.108579 1581510 retry.go:31] will retry after 341.198674ms: missing components: kube-dns
	I1209 04:18:29.353134 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:29.398149 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:29.414349 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:29.424339 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:29.457139 1581510 system_pods.go:86] 19 kube-system pods found
	I1209 04:18:29.457219 1581510 system_pods.go:89] "coredns-66bc5c9577-rvbf9" [35948c37-785f-4aa1-9a5b-943c895a4a5c] Running
	I1209 04:18:29.457246 1581510 system_pods.go:89] "csi-hostpath-attacher-0" [e4b01171-b9b6-4022-9f41-57183f5d762b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1209 04:18:29.457269 1581510 system_pods.go:89] "csi-hostpath-resizer-0" [a0052ab4-7687-4065-80b6-f41145b70608] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1209 04:18:29.457327 1581510 system_pods.go:89] "csi-hostpathplugin-865n6" [ccf13813-b372-48b8-b02b-3ba9cffd5291] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1209 04:18:29.457346 1581510 system_pods.go:89] "etcd-addons-377526" [d54c5e9a-cbe9-487f-b06a-c8626b1d468b] Running
	I1209 04:18:29.457383 1581510 system_pods.go:89] "kindnet-whbx4" [314b6981-5dab-4d60-ad7b-4ee5fafa37fe] Running
	I1209 04:18:29.457407 1581510 system_pods.go:89] "kube-apiserver-addons-377526" [83c59d5f-df9b-4e36-a82e-53243caa5583] Running
	I1209 04:18:29.457428 1581510 system_pods.go:89] "kube-controller-manager-addons-377526" [34b73da0-2ba8-42fa-b9e5-6f64f0a71841] Running
	I1209 04:18:29.457465 1581510 system_pods.go:89] "kube-ingress-dns-minikube" [836995ce-e2ce-4f7c-bd3f-ecae6ed195a6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1209 04:18:29.457488 1581510 system_pods.go:89] "kube-proxy-vrrb5" [c66f00d5-5347-4e19-8806-1d20162ad7ba] Running
	I1209 04:18:29.457511 1581510 system_pods.go:89] "kube-scheduler-addons-377526" [645ccb07-6b76-4a43-a7e2-94b2ce1d9004] Running
	I1209 04:18:29.457548 1581510 system_pods.go:89] "metrics-server-85b7d694d7-pckkq" [5cf1cd5f-cc2e-4169-947b-41b6e4c45a46] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 04:18:29.457582 1581510 system_pods.go:89] "nvidia-device-plugin-daemonset-qpgbq" [bbfd593a-3793-4122-af52-8a0e32e51d36] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1209 04:18:29.457621 1581510 system_pods.go:89] "registry-6b586f9694-pd2mr" [53908d34-a310-48b3-ae54-ebda566b420b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1209 04:18:29.457648 1581510 system_pods.go:89] "registry-creds-764b6fb674-hdrg9" [6de1311b-03a7-4949-9055-39d7b8dbf7fe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1209 04:18:29.457670 1581510 system_pods.go:89] "registry-proxy-nlsrb" [2b82ec5a-1e98-4bd4-b422-b6fb23cad87c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1209 04:18:29.457713 1581510 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tx8sz" [b65548e1-7050-4896-b543-115dbcf7a7ba] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 04:18:29.457739 1581510 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zphwq" [d3462a77-4488-4807-8fb0-61c704574e50] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 04:18:29.457759 1581510 system_pods.go:89] "storage-provisioner" [e235f449-4cd1-4e82-b1b2-9ba59f81b5b0] Running
	I1209 04:18:29.457798 1581510 system_pods.go:126] duration metric: took 1.1436685s to wait for k8s-apps to be running ...
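The three retry.go lines above ("will retry after 227.613254ms / 309.414535ms / 341.198674ms: missing components: kube-dns") show a short randomized backoff between re-checks until no required component is missing. A minimal sketch of that pattern; the function names and the exact backoff window are assumptions, not minikube's retry.go:

```go
// retry_wait.go - illustrative retry loop: re-run a check with a small
// randomized backoff until it reports no missing components.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// checkFn returns the names of components still missing, e.g. ["kube-dns"].
type checkFn func() []string

func waitForComponents(check checkFn, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		missing := check()
		if len(missing) == 0 {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out; still missing: %v", missing)
		}
		// Randomized 200-600ms backoff, in the same range as the retry
		// intervals logged above (the exact window is assumed).
		backoff := 200*time.Millisecond + time.Duration(rand.Int63n(int64(400*time.Millisecond)))
		fmt.Printf("will retry after %v: missing components: %v\n", backoff, missing)
		time.Sleep(backoff)
	}
}

func main() {
	calls := 0
	check := func() []string {
		if calls++; calls < 3 {
			return []string{"kube-dns"} // simulate CoreDNS not yet Running
		}
		return nil
	}
	if err := waitForComponents(check, 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
```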
	I1209 04:18:29.457824 1581510 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 04:18:29.457946 1581510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 04:18:29.484181 1581510 system_svc.go:56] duration metric: took 26.349496ms WaitForService to wait for kubelet
	I1209 04:18:29.484279 1581510 kubeadm.go:587] duration metric: took 43.641254671s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 04:18:29.484331 1581510 node_conditions.go:102] verifying NodePressure condition ...
	I1209 04:18:29.494898 1581510 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1209 04:18:29.494976 1581510 node_conditions.go:123] node cpu capacity is 2
	I1209 04:18:29.495002 1581510 node_conditions.go:105] duration metric: took 10.649262ms to run NodePressure ...
	I1209 04:18:29.495041 1581510 start.go:242] waiting for startup goroutines ...
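The kubelet gate at 04:18:29 is a single command run inside the node (ssh_runner.go executes it over SSH): `sudo systemctl is-active --quiet service kubelet`, where exit code 0 means the unit is active. A minimal local sketch of the same check via os/exec, run locally for illustration rather than through minikube's ssh_runner:

```go
// service_check.go - illustrative sketch: `systemctl is-active --quiet`
// exits 0 iff the unit is active, so Run() returning nil means "running".
package main

import (
	"fmt"
	"os/exec"
)

func serviceActive(unit string) bool {
	return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
}

func main() {
	fmt.Println("kubelet active:", serviceActive("kubelet"))
}
```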
	I1209 04:18:29.852422 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:29.891810 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:29.914055 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:29.914132 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:30.351528 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:30.391565 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:30.413420 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:30.413903 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:30.851536 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:30.890443 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:30.914238 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:30.914416 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:31.351798 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:31.393386 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:31.414428 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:31.415241 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:31.850707 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:31.891045 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:31.914638 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:31.914801 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:32.350217 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:32.391515 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:32.414475 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:32.414647 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:32.854463 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:32.958227 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:32.958324 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:32.959783 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:33.351085 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:33.425353 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:33.437218 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:33.437349 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:33.852120 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:33.891624 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:33.915712 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:33.916281 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:34.350841 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:34.391412 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:34.417905 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:34.419244 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:34.855303 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:34.892416 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:34.915656 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:34.916265 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:35.354156 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:35.390941 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:35.415548 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:35.416066 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:35.851263 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:35.891580 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:35.914791 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:35.914928 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:36.350544 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:36.390714 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:36.414733 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:36.415180 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:36.851304 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:36.891467 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:36.915486 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:36.915869 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:37.350837 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:37.391830 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:37.414714 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:37.415092 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:37.851969 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:37.891401 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:37.916900 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:37.916998 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:38.349636 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:38.390413 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:38.414772 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:38.414995 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:38.850814 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:38.890871 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:38.914695 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:38.914909 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:39.351261 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:39.392226 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:39.414124 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:39.414273 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:39.850225 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:39.891341 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:39.915608 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:39.916305 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:40.351393 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:40.391202 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:40.414134 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:40.414219 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:40.852575 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:40.891447 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:40.914330 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:40.914415 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:41.351722 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:41.390684 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:41.413168 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:41.413355 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:41.850067 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:41.890674 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:41.913999 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:41.914134 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:42.351625 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:42.390947 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:42.413894 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:42.414051 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:42.851424 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:42.891288 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:42.915459 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:42.915931 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:43.351294 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:43.391422 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:43.416087 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:43.416596 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:43.850801 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:43.891376 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:43.915723 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:43.916263 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:44.351737 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:44.390845 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:44.415075 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:44.415732 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:44.850656 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:44.891032 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:44.914952 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:44.915519 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:45.351151 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:45.391401 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:45.415861 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:45.416558 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:45.851644 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:45.891265 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:45.914613 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:45.915133 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:46.351457 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:46.391568 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:46.414243 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:46.414511 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:46.850234 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:46.892340 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:46.915419 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:46.915945 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:47.352258 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:47.391604 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:47.416342 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:47.416760 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:47.851359 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:47.891235 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:47.913860 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:47.914118 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:48.350698 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:48.390843 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:48.414530 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:48.414736 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:48.850290 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:48.891383 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:48.914492 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:48.914774 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:49.349934 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:49.391963 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:49.414479 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:49.414551 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:49.851272 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:49.893030 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:49.917080 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:49.919504 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:50.350202 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:50.391992 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:50.415445 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:50.415818 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:50.860731 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:50.956832 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:50.957365 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:50.957847 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:51.351171 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:51.393398 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:51.417591 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:51.418163 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:51.854624 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:51.891642 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:51.915489 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:51.916046 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:52.351995 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:52.392010 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:52.416275 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:52.416789 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:52.852034 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:52.890858 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:52.913388 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:52.914624 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:53.351049 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:53.391652 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:53.452060 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:53.452179 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:53.851583 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:53.890975 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:53.915244 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:53.915810 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:54.350559 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:54.391328 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:54.415512 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:54.415648 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:54.851434 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:54.891546 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:54.914880 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:54.915111 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:55.350694 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:55.391331 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:55.413912 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:55.414191 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:55.851642 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:55.894956 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:55.914057 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:55.914203 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:56.350087 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:56.390923 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:56.414002 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:56.414209 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:56.852128 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:56.951528 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:56.951662 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:56.951711 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:57.350419 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:57.390477 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:57.412737 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:57.412874 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:57.850449 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:57.891575 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:57.913050 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:57.913192 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:58.350530 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:58.390174 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:58.414202 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:58.414377 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:58.851068 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:58.890652 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:58.913952 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:58.914101 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:59.351047 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:59.390708 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:59.413297 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:59.413528 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:59.850640 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:59.891092 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:59.915559 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:59.915964 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:00.351554 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:00.390931 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:00.415432 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:00.415913 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:19:00.851154 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:00.891231 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:00.914010 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:19:00.914916 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:01.350140 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:01.390882 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:01.414238 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:01.414939 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:19:01.851470 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:01.890624 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:01.914406 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:01.914932 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:19:02.351311 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:02.391865 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:02.415245 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:19:02.415452 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:02.851400 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:02.891697 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:02.913465 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:19:02.913665 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:03.351028 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:03.392380 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:03.452608 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:19:03.453008 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:03.851057 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:03.891256 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:03.914073 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:19:03.914145 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:04.351032 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:04.390926 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:04.413557 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:04.413726 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:19:04.850480 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:04.891081 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:04.913924 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:04.914075 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:19:05.350863 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:05.391278 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:05.414976 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:19:05.415143 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:05.850785 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:05.890785 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:05.914714 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:05.914800 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:19:06.350791 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:06.390706 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:06.413336 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:19:06.413487 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:06.850383 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:06.891278 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:06.916188 1581510 kapi.go:107] duration metric: took 1m14.506628499s to wait for kubernetes.io/minikube-addons=registry ...
	I1209 04:19:06.916555 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:07.351773 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:07.390946 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:07.413349 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:07.851281 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:07.893223 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:07.953294 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:08.353216 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:08.391125 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:08.413030 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:08.851872 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:08.891199 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:08.913990 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:09.350736 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:09.390502 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:09.413005 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:09.851107 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:09.891249 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:09.914467 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:10.352371 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:10.390988 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:10.412938 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:10.854666 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:10.890850 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:10.913047 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:11.350617 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:11.391590 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:11.414174 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:11.850735 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:11.893615 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:11.913149 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:12.360877 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:12.390772 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:12.413240 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:12.851856 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:12.953268 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:12.953395 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:13.358892 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:13.391488 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:13.412588 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:13.850286 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:13.891659 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:13.912799 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:14.350807 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:14.391802 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:14.414137 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:14.852491 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:14.891510 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:14.913190 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:15.350953 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:15.390857 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:15.413166 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:15.851370 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:15.895950 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:15.915568 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:16.351009 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:16.451903 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:16.452093 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:16.851291 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:16.891170 1581510 kapi.go:107] duration metric: took 1m22.003682613s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1209 04:19:16.892302 1581510 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-377526 cluster.
	I1209 04:19:16.893612 1581510 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1209 04:19:16.895505 1581510 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1209 04:19:16.913838 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:17.349800 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:17.412996 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:17.851771 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:17.913158 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:18.350737 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:18.413465 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:18.849827 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:18.913706 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:19.351265 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:19.413262 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:19.853140 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:19.913370 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:20.351420 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:20.413922 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:20.851201 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:20.914045 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:21.350745 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:21.412784 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:21.850088 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:21.913227 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:22.351345 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:22.414066 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:22.864838 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:22.913921 1581510 kapi.go:107] duration metric: took 1m30.504359541s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1209 04:19:23.351533 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:23.852375 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:24.405354 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:24.850258 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:25.352184 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:25.862121 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:26.351021 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:26.851388 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:27.351241 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:27.851514 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:28.350432 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:28.849929 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:29.351428 1581510 kapi.go:107] duration metric: took 1m36.504776027s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1209 04:19:29.355861 1581510 out.go:179] * Enabled addons: inspektor-gadget, storage-provisioner, registry-creds, cloud-spanner, nvidia-device-plugin, amd-gpu-device-plugin, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1209 04:19:29.359424 1581510 addons.go:530] duration metric: took 1m43.515374933s for enable addons: enabled=[inspektor-gadget storage-provisioner registry-creds cloud-spanner nvidia-device-plugin amd-gpu-device-plugin ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1209 04:19:29.359488 1581510 start.go:247] waiting for cluster config update ...
	I1209 04:19:29.359518 1581510 start.go:256] writing updated cluster config ...
	I1209 04:19:29.360948 1581510 ssh_runner.go:195] Run: rm -f paused
	I1209 04:19:29.367561 1581510 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 04:19:29.371352 1581510 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rvbf9" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 04:19:29.376865 1581510 pod_ready.go:94] pod "coredns-66bc5c9577-rvbf9" is "Ready"
	I1209 04:19:29.376896 1581510 pod_ready.go:86] duration metric: took 5.514131ms for pod "coredns-66bc5c9577-rvbf9" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 04:19:29.379068 1581510 pod_ready.go:83] waiting for pod "etcd-addons-377526" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 04:19:29.383892 1581510 pod_ready.go:94] pod "etcd-addons-377526" is "Ready"
	I1209 04:19:29.383918 1581510 pod_ready.go:86] duration metric: took 4.821906ms for pod "etcd-addons-377526" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 04:19:29.386115 1581510 pod_ready.go:83] waiting for pod "kube-apiserver-addons-377526" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 04:19:29.391395 1581510 pod_ready.go:94] pod "kube-apiserver-addons-377526" is "Ready"
	I1209 04:19:29.391429 1581510 pod_ready.go:86] duration metric: took 5.281653ms for pod "kube-apiserver-addons-377526" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 04:19:29.393727 1581510 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-377526" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 04:19:29.772147 1581510 pod_ready.go:94] pod "kube-controller-manager-addons-377526" is "Ready"
	I1209 04:19:29.772183 1581510 pod_ready.go:86] duration metric: took 378.431716ms for pod "kube-controller-manager-addons-377526" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 04:19:29.971715 1581510 pod_ready.go:83] waiting for pod "kube-proxy-vrrb5" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 04:19:30.371795 1581510 pod_ready.go:94] pod "kube-proxy-vrrb5" is "Ready"
	I1209 04:19:30.371833 1581510 pod_ready.go:86] duration metric: took 400.092091ms for pod "kube-proxy-vrrb5" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 04:19:30.572457 1581510 pod_ready.go:83] waiting for pod "kube-scheduler-addons-377526" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 04:19:30.972046 1581510 pod_ready.go:94] pod "kube-scheduler-addons-377526" is "Ready"
	I1209 04:19:30.972075 1581510 pod_ready.go:86] duration metric: took 399.590859ms for pod "kube-scheduler-addons-377526" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 04:19:30.972087 1581510 pod_ready.go:40] duration metric: took 1.604494789s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 04:19:31.056572 1581510 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1209 04:19:31.059659 1581510 out.go:179] * Done! kubectl is now configured to use "addons-377526" cluster and "default" namespace by default
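
The long `kapi.go:96` run above is minikube polling each addon's pods by label selector until they leave `Pending`; the matching `kapi.go:107` lines record how long each selector took. A minimal client-go sketch of that pattern (the function name, the 500ms interval, and the 6-minute timeout are illustrative assumptions, not minikube's actual code):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsByLabel polls until every pod matching selector in ns is
// Running, printing the same shape of message as kapi.go:96 above.
func waitForPodsByLabel(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
	start := time.Now()
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		ready := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				ready = false
			}
		}
		if ready {
			fmt.Printf("took %s to wait for %s\n", time.Since(start), selector) // cf. kapi.go:107
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForPodsByLabel(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry"); err != nil {
		panic(err)
	}
}
```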
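
The gcp-auth messages above also describe the opt-out mechanism: once the addon is enabled, credentials are mounted into every new pod unless the pod carries the `gcp-auth-skip-secret` label. A sketch of creating such a pod with client-go; the log only names the label key, so the value `"true"`, the pod name, and the container image are assumptions:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "no-gcp-auth", // hypothetical pod name
			// The gcp-auth webhook skips pods carrying this label key
			// (key taken from the log above; the value is an assumption).
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "gcr.io/k8s-minikube/busybox", // image seen elsewhere in this report
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```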
	
	
	==> CRI-O <==
	Dec 09 04:22:13 addons-377526 crio[832]: time="2025-12-09T04:22:13.652646572Z" level=info msg="Removed container e615885d0725253b482b1bffe88728f16030867d715e4b2ca6b8bbde02b7734d: kube-system/registry-creds-764b6fb674-hdrg9/registry-creds" id=5ea6e8d3-cd3e-44a2-bf14-8e3b659e3e61 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 09 04:22:27 addons-377526 crio[832]: time="2025-12-09T04:22:27.7250233Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-qpq98/POD" id=988bb89e-5cd6-42a8-9527-41cbbba3c2b5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 09 04:22:27 addons-377526 crio[832]: time="2025-12-09T04:22:27.725106321Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 04:22:27 addons-377526 crio[832]: time="2025-12-09T04:22:27.743095389Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-qpq98 Namespace:default ID:3acc4f88f354a13547b2607bfec8004e2a9f63c5750ba0529e98f4026c15cda0 UID:0e0ffa76-d4e4-406b-8017-ba7c04c3b671 NetNS:/var/run/netns/031e89cc-c897-4d01-b23c-6c65e69a5236 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012b790}] Aliases:map[]}"
	Dec 09 04:22:27 addons-377526 crio[832]: time="2025-12-09T04:22:27.743266558Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-qpq98 to CNI network \"kindnet\" (type=ptp)"
	Dec 09 04:22:27 addons-377526 crio[832]: time="2025-12-09T04:22:27.770307867Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-qpq98 Namespace:default ID:3acc4f88f354a13547b2607bfec8004e2a9f63c5750ba0529e98f4026c15cda0 UID:0e0ffa76-d4e4-406b-8017-ba7c04c3b671 NetNS:/var/run/netns/031e89cc-c897-4d01-b23c-6c65e69a5236 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012b790}] Aliases:map[]}"
	Dec 09 04:22:27 addons-377526 crio[832]: time="2025-12-09T04:22:27.770458614Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-qpq98 for CNI network kindnet (type=ptp)"
	Dec 09 04:22:27 addons-377526 crio[832]: time="2025-12-09T04:22:27.779684469Z" level=info msg="Ran pod sandbox 3acc4f88f354a13547b2607bfec8004e2a9f63c5750ba0529e98f4026c15cda0 with infra container: default/hello-world-app-5d498dc89-qpq98/POD" id=988bb89e-5cd6-42a8-9527-41cbbba3c2b5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 09 04:22:27 addons-377526 crio[832]: time="2025-12-09T04:22:27.780957327Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=3c810cc8-14ee-48e1-b97f-31b354aaa523 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:22:27 addons-377526 crio[832]: time="2025-12-09T04:22:27.781117837Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=3c810cc8-14ee-48e1-b97f-31b354aaa523 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:22:27 addons-377526 crio[832]: time="2025-12-09T04:22:27.781173731Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=3c810cc8-14ee-48e1-b97f-31b354aaa523 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:22:27 addons-377526 crio[832]: time="2025-12-09T04:22:27.78205063Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=bf0641c0-7b38-4ce5-982e-6c6032e705d6 name=/runtime.v1.ImageService/PullImage
	Dec 09 04:22:27 addons-377526 crio[832]: time="2025-12-09T04:22:27.789615103Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 09 04:22:28 addons-377526 crio[832]: time="2025-12-09T04:22:28.427395044Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=bf0641c0-7b38-4ce5-982e-6c6032e705d6 name=/runtime.v1.ImageService/PullImage
	Dec 09 04:22:28 addons-377526 crio[832]: time="2025-12-09T04:22:28.428252021Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=db0f9114-378d-4ef2-9e8a-b1db7fe53da2 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:22:28 addons-377526 crio[832]: time="2025-12-09T04:22:28.432716623Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=effbb73e-3d8d-4eda-9e2f-4ab9b609a838 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:22:28 addons-377526 crio[832]: time="2025-12-09T04:22:28.443519511Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-qpq98/hello-world-app" id=c953b4cd-6a1e-46ef-9197-d60e304d9087 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 04:22:28 addons-377526 crio[832]: time="2025-12-09T04:22:28.443822743Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 04:22:28 addons-377526 crio[832]: time="2025-12-09T04:22:28.452716066Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 04:22:28 addons-377526 crio[832]: time="2025-12-09T04:22:28.453721713Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/704842a7e766df3ff65c7a97650e259b768d0069ebd68c70271839db61b09c2f/merged/etc/passwd: no such file or directory"
	Dec 09 04:22:28 addons-377526 crio[832]: time="2025-12-09T04:22:28.453861538Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/704842a7e766df3ff65c7a97650e259b768d0069ebd68c70271839db61b09c2f/merged/etc/group: no such file or directory"
	Dec 09 04:22:28 addons-377526 crio[832]: time="2025-12-09T04:22:28.454309838Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 04:22:28 addons-377526 crio[832]: time="2025-12-09T04:22:28.475522733Z" level=info msg="Created container 26d67f3f0df321044a9a915c39f486539d5ca4053b8c03429944a6d58482f30f: default/hello-world-app-5d498dc89-qpq98/hello-world-app" id=c953b4cd-6a1e-46ef-9197-d60e304d9087 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 04:22:28 addons-377526 crio[832]: time="2025-12-09T04:22:28.479306073Z" level=info msg="Starting container: 26d67f3f0df321044a9a915c39f486539d5ca4053b8c03429944a6d58482f30f" id=d0088fec-0043-4c3b-8ca3-773e898b4a78 name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 04:22:28 addons-377526 crio[832]: time="2025-12-09T04:22:28.483977052Z" level=info msg="Started container" PID=7059 containerID=26d67f3f0df321044a9a915c39f486539d5ca4053b8c03429944a6d58482f30f description=default/hello-world-app-5d498dc89-qpq98/hello-world-app id=d0088fec-0043-4c3b-8ca3-773e898b4a78 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3acc4f88f354a13547b2607bfec8004e2a9f63c5750ba0529e98f4026c15cda0
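
The CRI-O entries above trace a single pod start across the CRI gRPC surface: `RunPodSandbox`, then `ImageStatus` (a miss), `PullImage`, `CreateContainer`, and `StartContainer`. Below is a bare sketch of the same call sequence against CRI-O's conventional socket using `k8s.io/cri-api`; the request fields are trimmed to a minimum (a real kubelet also supplies log paths, security context, and mounts), so read it as an outline of the protocol rather than a complete pod launcher:

```go
package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx := context.Background()

	// CRI-O's conventional socket path; adjust for the host under test.
	conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "hello-world-app-5d498dc89-qpq98",
			Namespace: "default",
			Uid:       "demo-uid", // a real call uses the pod's UID from the API server
		},
	}

	// 1. RunPodSandbox: the "Running pod sandbox" / "Ran pod sandbox" lines.
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		panic(err)
	}

	// 2. ImageStatus, then PullImage when the image is absent locally
	//    (the "Image ... not found" then "Pulling image" lines).
	ref := &runtimeapi.ImageSpec{Image: "docker.io/kicbase/echo-server:1.0"}
	st, err := img.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{Image: ref})
	if err != nil {
		panic(err)
	}
	if st.Image == nil {
		if _, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{Image: ref}); err != nil {
			panic(err)
		}
	}

	// 3. CreateContainer inside the sandbox, then StartContainer.
	cr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		SandboxConfig: sandboxCfg,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "hello-world-app"},
			Image:    ref,
		},
	})
	if err != nil {
		panic(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: cr.ContainerId}); err != nil {
		panic(err)
	}
}
```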
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	26d67f3f0df32       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   3acc4f88f354a       hello-world-app-5d498dc89-qpq98             default
	7644b8e322bd8       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             16 seconds ago           Exited              registry-creds                           4                   a4e09d1508cac       registry-creds-764b6fb674-hdrg9             kube-system
	1fad626f8298c       cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1                                                                             2 minutes ago            Running             nginx                                    0                   c6f1f1dd11ce9       nginx                                       default
	9e1023fff88d1       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          2 minutes ago            Running             busybox                                  0                   b9653476c6ce3       busybox                                     default
	c04442e39ef35       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   0aa207553e1a0       csi-hostpathplugin-865n6                    kube-system
	7aebdd3431a65       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   0aa207553e1a0       csi-hostpathplugin-865n6                    kube-system
	069b46a278cfb       e8105550077f5c6c8e92536651451107053f0e41635396ee42aef596441c179a                                                                             3 minutes ago            Exited              patch                                    3                   6896fc22efa77       ingress-nginx-admission-patch-tj9l7         ingress-nginx
	18febaede59c2       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   0aa207553e1a0       csi-hostpathplugin-865n6                    kube-system
	8cf0b6bd32f5b       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   0aa207553e1a0       csi-hostpathplugin-865n6                    kube-system
	878f8c9d656ac       registry.k8s.io/ingress-nginx/controller@sha256:75494e2145fbebf362d24e24e9285b7fbb7da8783ab272092e3126e24ee4776d                             3 minutes ago            Running             controller                               0                   ef16253316274       ingress-nginx-controller-85d4c799dd-m8q7w   ingress-nginx
	106f96ef6ff47       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   fc72b833d53fc       gcp-auth-78565c9fb4-r5mtf                   gcp-auth
	174130d7501d2       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   0aa207553e1a0       csi-hostpathplugin-865n6                    kube-system
	b6aeff0b9c015       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:fadc7bf59b69965b6707edb68022bed4f55a1f99b15f7acd272793e48f171496                            3 minutes ago            Running             gadget                                   0                   7ec9fc2954019       gadget-s4rfv                                gadget
	a69a96490b5ae       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   c262bb4bd18a7       csi-hostpath-resizer-0                      kube-system
	cc37fac1bc08a       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   cafc9421507a5       registry-proxy-nlsrb                        kube-system
	9489ae99adda3       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   4d300aff6fa93       nvidia-device-plugin-daemonset-qpgbq        kube-system
	197524f2c1763       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   0aa207553e1a0       csi-hostpathplugin-865n6                    kube-system
	bb6380f4073cd       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   3 minutes ago            Exited              create                                   0                   72773acb3780f       ingress-nginx-admission-create-stzpv        ingress-nginx
	18c00d5991fec       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago            Running             yakd                                     0                   28b3b2771bf0d       yakd-dashboard-5ff678cb9-lzq6q              yakd-dashboard
	8d2bef8d89158       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   607f4d61a1ea2       snapshot-controller-7d9fbc56b8-zphwq        kube-system
	e89fcd7e7a651       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago            Running             csi-attacher                             0                   c4c12cc03fe0f       csi-hostpath-attacher-0                     kube-system
	9b3a8c868c3c9       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago            Running             minikube-ingress-dns                     0                   18260dfb1d93f       kube-ingress-dns-minikube                   kube-system
	05aaaea2b06ae       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago            Running             local-path-provisioner                   0                   3f6869f446610       local-path-provisioner-648f6765c9-zx7zg     local-path-storage
	d448cac096a04       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   de397074db7ec       snapshot-controller-7d9fbc56b8-tx8sz        kube-system
	a549d8652b346       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           3 minutes ago            Running             registry                                 0                   92c76b38d4431       registry-6b586f9694-pd2mr                   kube-system
	9dce99afd0d08       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               3 minutes ago            Running             cloud-spanner-emulator                   0                   5e6addc9258c8       cloud-spanner-emulator-5bdddb765-bg8ss      default
	365b8c540ac8b       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        3 minutes ago            Running             metrics-server                           0                   9ac52df87dbe2       metrics-server-85b7d694d7-pckkq             kube-system
	ade186251b0b0       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   8cf0a1c4ce903       storage-provisioner                         kube-system
	895a853e4aab3       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   c5f1ddbb5e9e9       coredns-66bc5c9577-rvbf9                    kube-system
	3f583b93b3d82       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                                                             4 minutes ago            Running             kube-proxy                               0                   07ed767d5928d       kube-proxy-vrrb5                            kube-system
	f23d383bb9010       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago            Running             kindnet-cni                              0                   1d2485236b7c6       kindnet-whbx4                               kube-system
	3e19f8eb0be86       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                                                             4 minutes ago            Running             kube-apiserver                           0                   d274abaf3b590       kube-apiserver-addons-377526                kube-system
	5f20869a412bb       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                                                             4 minutes ago            Running             kube-controller-manager                  0                   dc35a8dbc37d6       kube-controller-manager-addons-377526       kube-system
	23444ddd657bb       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                                                             4 minutes ago            Running             kube-scheduler                           0                   e7bdf487e5df3       kube-scheduler-addons-377526                kube-system
	3d9befd5158d0       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             4 minutes ago            Running             etcd                                     0                   5753de0a77b15       etcd-addons-377526                          kube-system
	
	
	==> coredns [895a853e4aab3bfd20dc33efe93732055e9143ac6017c4be43840f854767cfac] <==
	[INFO] 10.244.0.15:37659 - 24991 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002434535s
	[INFO] 10.244.0.15:37659 - 6094 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000122414s
	[INFO] 10.244.0.15:37659 - 54089 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000164932s
	[INFO] 10.244.0.15:52128 - 63119 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000168749s
	[INFO] 10.244.0.15:52128 - 62930 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000104338s
	[INFO] 10.244.0.15:49491 - 51566 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000097248s
	[INFO] 10.244.0.15:49491 - 51386 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000075587s
	[INFO] 10.244.0.15:43427 - 56565 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000092711s
	[INFO] 10.244.0.15:43427 - 56128 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000191805s
	[INFO] 10.244.0.15:53242 - 29814 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001249998s
	[INFO] 10.244.0.15:53242 - 29601 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001398677s
	[INFO] 10.244.0.15:34182 - 31941 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000142754s
	[INFO] 10.244.0.15:34182 - 31505 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000174615s
	[INFO] 10.244.0.20:59914 - 8191 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000154455s
	[INFO] 10.244.0.20:45159 - 24076 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000128584s
	[INFO] 10.244.0.20:46861 - 4752 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000101474s
	[INFO] 10.244.0.20:43657 - 30263 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000147144s
	[INFO] 10.244.0.20:46150 - 21464 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000106381s
	[INFO] 10.244.0.20:34507 - 33229 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000096863s
	[INFO] 10.244.0.20:47061 - 28902 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002729832s
	[INFO] 10.244.0.20:51136 - 64271 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003465068s
	[INFO] 10.244.0.20:44624 - 31626 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000849485s
	[INFO] 10.244.0.20:42413 - 47439 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.00135518s
	[INFO] 10.244.0.23:49734 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000207018s
	[INFO] 10.244.0.23:52493 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000198099s
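
The coredns queries above are a single service lookup expanded through the pod's `resolv.conf` search path (the `ndots:5` default): each suffix (`kube-system.svc.cluster.local`, `svc.cluster.local`, `cluster.local`, and the EC2 `us-east-2.compute.internal` domain) comes back NXDOMAIN until the fully qualified name answers NOERROR. Appending a trailing dot marks a name as fully qualified and skips that expansion, as in this small sketch:

```go
package main

import (
	"context"
	"fmt"
	"net"
)

func main() {
	// Inside a cluster pod, a relative name is tried against every
	// resolv.conf search suffix first (the NXDOMAIN runs in the log above).
	// The trailing dot marks the name as an FQDN, so only one query is sent.
	addrs, err := net.DefaultResolver.LookupHost(context.Background(),
		"registry.kube-system.svc.cluster.local.")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("registry service resolves to:", addrs)
}
```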
	
	
	==> describe nodes <==
	Name:               addons-377526
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-377526
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d
	                    minikube.k8s.io/name=addons-377526
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_09T04_17_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-377526
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-377526"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Dec 2025 04:17:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-377526
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Dec 2025 04:22:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Dec 2025 04:21:14 +0000   Tue, 09 Dec 2025 04:17:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Dec 2025 04:21:14 +0000   Tue, 09 Dec 2025 04:17:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Dec 2025 04:21:14 +0000   Tue, 09 Dec 2025 04:17:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Dec 2025 04:21:14 +0000   Tue, 09 Dec 2025 04:18:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-377526
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 23f1bd729e908485546e733d693697cd
	  System UUID:                da83b65b-98c5-4850-a34f-46ba26303299
	  Boot ID:                    3c42bf6f-64e9-4298-a947-b5a2e6063f1e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m58s
	  default                     cloud-spanner-emulator-5bdddb765-bg8ss       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  default                     hello-world-app-5d498dc89-qpq98              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  gadget                      gadget-s4rfv                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  gcp-auth                    gcp-auth-78565c9fb4-r5mtf                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-m8q7w    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m37s
	  kube-system                 coredns-66bc5c9577-rvbf9                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m43s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 csi-hostpathplugin-865n6                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 etcd-addons-377526                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4m48s
	  kube-system                 kindnet-whbx4                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m43s
	  kube-system                 kube-apiserver-addons-377526                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-controller-manager-addons-377526        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 kube-proxy-vrrb5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 kube-scheduler-addons-377526                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 metrics-server-85b7d694d7-pckkq              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m38s
	  kube-system                 nvidia-device-plugin-daemonset-qpgbq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 registry-6b586f9694-pd2mr                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 registry-creds-764b6fb674-hdrg9              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 registry-proxy-nlsrb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 snapshot-controller-7d9fbc56b8-tx8sz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 snapshot-controller-7d9fbc56b8-zphwq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  local-path-storage          local-path-provisioner-648f6765c9-zx7zg      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-lzq6q               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m41s                  kube-proxy       
	  Normal   Starting                 4m57s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m57s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m56s (x8 over 4m56s)  kubelet          Node addons-377526 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m56s (x8 over 4m56s)  kubelet          Node addons-377526 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m56s (x8 over 4m56s)  kubelet          Node addons-377526 status is now: NodeHasSufficientPID
	  Normal   Starting                 4m49s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m49s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m48s                  kubelet          Node addons-377526 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m48s                  kubelet          Node addons-377526 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m48s                  kubelet          Node addons-377526 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m44s                  node-controller  Node addons-377526 event: Registered Node addons-377526 in Controller
	  Normal   NodeReady                4m1s                   kubelet          Node addons-377526 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec 9 02:15] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 03:35] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 04:15] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 04:17] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [3d9befd5158d0fb9dcd408b398d0ade47c7417da742e387aa66109ca8ed7918e] <==
	{"level":"warn","ts":"2025-12-09T04:17:37.410946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:17:37.425336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:17:37.463716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:17:37.473094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:17:37.495097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:17:37.503233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:17:37.518671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:17:37.532924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:17:37.548150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:17:37.565314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:17:37.583531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:17:37.595640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:17:37.609358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:17:37.624201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:17:37.646089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:17:37.673294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:17:37.690866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:17:37.702712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:17:37.769249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:17:52.993204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:17:53.014095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:18:15.447842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:18:15.462965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:18:15.492111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:18:15.507859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46364","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [106f96ef6ff47529e324d2dc323a5c98b17fab4f289dd307202e27b7bde1f08d] <==
	2025/12/09 04:19:15 GCP Auth Webhook started!
	2025/12/09 04:19:31 Ready to marshal response ...
	2025/12/09 04:19:31 Ready to write response ...
	2025/12/09 04:19:31 Ready to marshal response ...
	2025/12/09 04:19:31 Ready to write response ...
	2025/12/09 04:19:31 Ready to marshal response ...
	2025/12/09 04:19:31 Ready to write response ...
	2025/12/09 04:19:51 Ready to marshal response ...
	2025/12/09 04:19:51 Ready to write response ...
	2025/12/09 04:19:55 Ready to marshal response ...
	2025/12/09 04:19:55 Ready to write response ...
	2025/12/09 04:19:55 Ready to marshal response ...
	2025/12/09 04:19:55 Ready to write response ...
	2025/12/09 04:20:03 Ready to marshal response ...
	2025/12/09 04:20:03 Ready to write response ...
	2025/12/09 04:20:06 Ready to marshal response ...
	2025/12/09 04:20:06 Ready to write response ...
	2025/12/09 04:20:09 Ready to marshal response ...
	2025/12/09 04:20:09 Ready to write response ...
	2025/12/09 04:20:18 Ready to marshal response ...
	2025/12/09 04:20:18 Ready to write response ...
	2025/12/09 04:22:27 Ready to marshal response ...
	2025/12/09 04:22:27 Ready to write response ...
	
	
	==> kernel <==
	 04:22:29 up  9:04,  0 user,  load average: 1.06, 1.64, 1.57
	Linux addons-377526 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f23d383bb901021ad468c9e01555bb740a0facf5322dcee6b0def8a8f5c26cef] <==
	I1209 04:20:27.624941       1 main.go:301] handling current node
	I1209 04:20:37.630847       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 04:20:37.630888       1 main.go:301] handling current node
	I1209 04:20:47.625424       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 04:20:47.625459       1 main.go:301] handling current node
	I1209 04:20:57.632496       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 04:20:57.632532       1 main.go:301] handling current node
	I1209 04:21:07.630708       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 04:21:07.630744       1 main.go:301] handling current node
	I1209 04:21:17.625645       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 04:21:17.625757       1 main.go:301] handling current node
	I1209 04:21:27.628058       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 04:21:27.628116       1 main.go:301] handling current node
	I1209 04:21:37.627968       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 04:21:37.628002       1 main.go:301] handling current node
	I1209 04:21:47.626807       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 04:21:47.626844       1 main.go:301] handling current node
	I1209 04:21:57.624771       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 04:21:57.624883       1 main.go:301] handling current node
	I1209 04:22:07.632710       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 04:22:07.632745       1 main.go:301] handling current node
	I1209 04:22:17.625049       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 04:22:17.625099       1 main.go:301] handling current node
	I1209 04:22:27.624747       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 04:22:27.624851       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3e19f8eb0be8689c1e6db170c4a1893db77016e40e2d7ee36ae46433d1ab5dc7] <==
	W1209 04:18:15.492005       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1209 04:18:15.507378       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1209 04:18:28.058712       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.8.221:443: connect: connection refused
	E1209 04:18:28.058832       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.8.221:443: connect: connection refused" logger="UnhandledError"
	W1209 04:18:28.059406       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.8.221:443: connect: connection refused
	E1209 04:18:28.059482       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.8.221:443: connect: connection refused" logger="UnhandledError"
	W1209 04:18:28.162112       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.8.221:443: connect: connection refused
	E1209 04:18:28.162159       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.8.221:443: connect: connection refused" logger="UnhandledError"
	E1209 04:18:44.472799       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.212.123:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.212.123:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.212.123:443: connect: connection refused" logger="UnhandledError"
	W1209 04:18:44.472889       1 handler_proxy.go:99] no RequestInfo found in the context
	E1209 04:18:44.472941       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1209 04:18:44.473599       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.212.123:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.212.123:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.212.123:443: connect: connection refused" logger="UnhandledError"
	E1209 04:18:44.480165       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.212.123:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.212.123:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.212.123:443: connect: connection refused" logger="UnhandledError"
	E1209 04:18:44.499299       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.212.123:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.212.123:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.212.123:443: connect: connection refused" logger="UnhandledError"
	I1209 04:18:44.649952       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1209 04:19:40.172294       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34630: use of closed network connection
	E1209 04:19:40.417738       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34656: use of closed network connection
	E1209 04:19:40.567760       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34672: use of closed network connection
	I1209 04:20:08.975411       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1209 04:20:09.373534       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.242.154"}
	I1209 04:20:15.736119       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1209 04:22:27.598609       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.207.209"}
	
	
	==> kube-controller-manager [5f20869a412bbccdd019d0d88792fb1e038ef017fb684b743afc406185107fab] <==
	I1209 04:17:45.470589       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1209 04:17:45.470830       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1209 04:17:45.470908       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1209 04:17:45.470553       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1209 04:17:45.471023       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1209 04:17:45.471086       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1209 04:17:45.471154       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-377526"
	I1209 04:17:45.471192       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1209 04:17:45.471239       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1209 04:17:45.470782       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1209 04:17:45.470610       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1209 04:17:45.473792       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1209 04:17:45.471250       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1209 04:17:45.475823       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1209 04:17:45.473800       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1209 04:17:45.481298       1 shared_informer.go:356] "Caches are synced" controller="job"
	E1209 04:17:51.549722       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1209 04:18:15.440125       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 04:18:15.440280       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1209 04:18:15.440343       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1209 04:18:15.480271       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1209 04:18:15.485099       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1209 04:18:15.540694       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1209 04:18:15.586310       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1209 04:18:30.481590       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [3f583b93b3d82da13bf4c0cc7590397283a9f565f160c0b4aad9b625564dde0f] <==
	I1209 04:17:47.578106       1 server_linux.go:53] "Using iptables proxy"
	I1209 04:17:47.681299       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1209 04:17:47.782240       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1209 04:17:47.782272       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1209 04:17:47.782359       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 04:17:47.830228       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1209 04:17:47.830284       1 server_linux.go:132] "Using iptables Proxier"
	I1209 04:17:47.844499       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 04:17:47.845834       1 server.go:527] "Version info" version="v1.34.2"
	I1209 04:17:47.845860       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 04:17:47.847423       1 config.go:200] "Starting service config controller"
	I1209 04:17:47.847435       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1209 04:17:47.847452       1 config.go:106] "Starting endpoint slice config controller"
	I1209 04:17:47.847456       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1209 04:17:47.847466       1 config.go:403] "Starting serviceCIDR config controller"
	I1209 04:17:47.847477       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1209 04:17:47.848159       1 config.go:309] "Starting node config controller"
	I1209 04:17:47.848167       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1209 04:17:47.848173       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1209 04:17:47.948128       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1209 04:17:47.948174       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1209 04:17:47.948214       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [23444ddd657bbd00eed4c8df42d61dc49f01325e6c8f6ca46b95e4e0ebfec769] <==
	E1209 04:17:38.487307       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1209 04:17:38.487404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1209 04:17:38.487452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1209 04:17:38.487471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1209 04:17:38.487425       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1209 04:17:38.489680       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1209 04:17:38.489822       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1209 04:17:38.499826       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1209 04:17:38.499992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1209 04:17:38.500128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1209 04:17:38.500302       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1209 04:17:38.500418       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1209 04:17:39.406004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1209 04:17:39.530207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1209 04:17:39.587243       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1209 04:17:39.589592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1209 04:17:39.595991       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1209 04:17:39.630506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1209 04:17:39.630793       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1209 04:17:39.636364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1209 04:17:39.684539       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1209 04:17:39.714113       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1209 04:17:39.752757       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1209 04:17:39.761557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1209 04:17:42.859244       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 09 04:21:38 addons-377526 kubelet[1267]: I1209 04:21:38.026113    1267 scope.go:117] "RemoveContainer" containerID="e615885d0725253b482b1bffe88728f16030867d715e4b2ca6b8bbde02b7734d"
	Dec 09 04:21:38 addons-377526 kubelet[1267]: E1209 04:21:38.026278    1267 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 40s restarting failed container=registry-creds pod=registry-creds-764b6fb674-hdrg9_kube-system(6de1311b-03a7-4949-9055-39d7b8dbf7fe)\"" pod="kube-system/registry-creds-764b6fb674-hdrg9" podUID="6de1311b-03a7-4949-9055-39d7b8dbf7fe"
	Dec 09 04:21:39 addons-377526 kubelet[1267]: I1209 04:21:39.027363    1267 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-qpgbq" secret="" err="secret \"gcp-auth\" not found"
	Dec 09 04:21:40 addons-377526 kubelet[1267]: I1209 04:21:40.025742    1267 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-nlsrb" secret="" err="secret \"gcp-auth\" not found"
	Dec 09 04:21:50 addons-377526 kubelet[1267]: I1209 04:21:50.026053    1267 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-hdrg9" secret="" err="secret \"gcp-auth\" not found"
	Dec 09 04:21:50 addons-377526 kubelet[1267]: I1209 04:21:50.026146    1267 scope.go:117] "RemoveContainer" containerID="e615885d0725253b482b1bffe88728f16030867d715e4b2ca6b8bbde02b7734d"
	Dec 09 04:21:50 addons-377526 kubelet[1267]: E1209 04:21:50.026314    1267 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 40s restarting failed container=registry-creds pod=registry-creds-764b6fb674-hdrg9_kube-system(6de1311b-03a7-4949-9055-39d7b8dbf7fe)\"" pod="kube-system/registry-creds-764b6fb674-hdrg9" podUID="6de1311b-03a7-4949-9055-39d7b8dbf7fe"
	Dec 09 04:22:02 addons-377526 kubelet[1267]: I1209 04:22:02.025603    1267 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-hdrg9" secret="" err="secret \"gcp-auth\" not found"
	Dec 09 04:22:02 addons-377526 kubelet[1267]: I1209 04:22:02.025688    1267 scope.go:117] "RemoveContainer" containerID="e615885d0725253b482b1bffe88728f16030867d715e4b2ca6b8bbde02b7734d"
	Dec 09 04:22:02 addons-377526 kubelet[1267]: E1209 04:22:02.025849    1267 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 40s restarting failed container=registry-creds pod=registry-creds-764b6fb674-hdrg9_kube-system(6de1311b-03a7-4949-9055-39d7b8dbf7fe)\"" pod="kube-system/registry-creds-764b6fb674-hdrg9" podUID="6de1311b-03a7-4949-9055-39d7b8dbf7fe"
	Dec 09 04:22:13 addons-377526 kubelet[1267]: I1209 04:22:13.026303    1267 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-hdrg9" secret="" err="secret \"gcp-auth\" not found"
	Dec 09 04:22:13 addons-377526 kubelet[1267]: I1209 04:22:13.026384    1267 scope.go:117] "RemoveContainer" containerID="e615885d0725253b482b1bffe88728f16030867d715e4b2ca6b8bbde02b7734d"
	Dec 09 04:22:13 addons-377526 kubelet[1267]: I1209 04:22:13.635170    1267 scope.go:117] "RemoveContainer" containerID="e615885d0725253b482b1bffe88728f16030867d715e4b2ca6b8bbde02b7734d"
	Dec 09 04:22:13 addons-377526 kubelet[1267]: I1209 04:22:13.635836    1267 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-hdrg9" secret="" err="secret \"gcp-auth\" not found"
	Dec 09 04:22:13 addons-377526 kubelet[1267]: I1209 04:22:13.635899    1267 scope.go:117] "RemoveContainer" containerID="7644b8e322bd8dcac5ed6a0c7a43e03e280db1d7a755252cb463ff25a127bd00"
	Dec 09 04:22:13 addons-377526 kubelet[1267]: E1209 04:22:13.636232    1267 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=registry-creds pod=registry-creds-764b6fb674-hdrg9_kube-system(6de1311b-03a7-4949-9055-39d7b8dbf7fe)\"" pod="kube-system/registry-creds-764b6fb674-hdrg9" podUID="6de1311b-03a7-4949-9055-39d7b8dbf7fe"
	Dec 09 04:22:17 addons-377526 kubelet[1267]: I1209 04:22:17.027240    1267 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-pd2mr" secret="" err="secret \"gcp-auth\" not found"
	Dec 09 04:22:27 addons-377526 kubelet[1267]: E1209 04:22:27.427162    1267 status_manager.go:1018] "Failed to get status for pod" err="pods \"hello-world-app-5d498dc89-qpq98\" is forbidden: User \"system:node:addons-377526\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-377526' and this object" podUID="0e0ffa76-d4e4-406b-8017-ba7c04c3b671" pod="default/hello-world-app-5d498dc89-qpq98"
	Dec 09 04:22:27 addons-377526 kubelet[1267]: I1209 04:22:27.510164    1267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfpv2\" (UniqueName: \"kubernetes.io/projected/0e0ffa76-d4e4-406b-8017-ba7c04c3b671-kube-api-access-vfpv2\") pod \"hello-world-app-5d498dc89-qpq98\" (UID: \"0e0ffa76-d4e4-406b-8017-ba7c04c3b671\") " pod="default/hello-world-app-5d498dc89-qpq98"
	Dec 09 04:22:27 addons-377526 kubelet[1267]: I1209 04:22:27.510459    1267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/0e0ffa76-d4e4-406b-8017-ba7c04c3b671-gcp-creds\") pod \"hello-world-app-5d498dc89-qpq98\" (UID: \"0e0ffa76-d4e4-406b-8017-ba7c04c3b671\") " pod="default/hello-world-app-5d498dc89-qpq98"
	Dec 09 04:22:27 addons-377526 kubelet[1267]: W1209 04:22:27.778992    1267 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/296d96ed056115803df5e9b6e1f695022ae85b36790b8d9d91c58e0053c079c9/crio-3acc4f88f354a13547b2607bfec8004e2a9f63c5750ba0529e98f4026c15cda0 WatchSource:0}: Error finding container 3acc4f88f354a13547b2607bfec8004e2a9f63c5750ba0529e98f4026c15cda0: Status 404 returned error can't find the container with id 3acc4f88f354a13547b2607bfec8004e2a9f63c5750ba0529e98f4026c15cda0
	Dec 09 04:22:28 addons-377526 kubelet[1267]: I1209 04:22:28.025834    1267 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-hdrg9" secret="" err="secret \"gcp-auth\" not found"
	Dec 09 04:22:28 addons-377526 kubelet[1267]: I1209 04:22:28.026089    1267 scope.go:117] "RemoveContainer" containerID="7644b8e322bd8dcac5ed6a0c7a43e03e280db1d7a755252cb463ff25a127bd00"
	Dec 09 04:22:28 addons-377526 kubelet[1267]: E1209 04:22:28.026715    1267 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=registry-creds pod=registry-creds-764b6fb674-hdrg9_kube-system(6de1311b-03a7-4949-9055-39d7b8dbf7fe)\"" pod="kube-system/registry-creds-764b6fb674-hdrg9" podUID="6de1311b-03a7-4949-9055-39d7b8dbf7fe"
	Dec 09 04:22:28 addons-377526 kubelet[1267]: I1209 04:22:28.723519    1267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-qpq98" podStartSLOduration=1.075529482 podStartE2EDuration="1.723495214s" podCreationTimestamp="2025-12-09 04:22:27 +0000 UTC" firstStartedPulling="2025-12-09 04:22:27.781429832 +0000 UTC m=+286.866501458" lastFinishedPulling="2025-12-09 04:22:28.429395573 +0000 UTC m=+287.514467190" observedRunningTime="2025-12-09 04:22:28.715128624 +0000 UTC m=+287.800200267" watchObservedRunningTime="2025-12-09 04:22:28.723495214 +0000 UTC m=+287.808566832"
	
	
	==> storage-provisioner [ade186251b0b03d5e21b3b509f2bf86293ef5ea617865111f2dd375f2cfaa2af] <==
	W1209 04:22:04.535289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:22:06.538352       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:22:06.543322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:22:08.547148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:22:08.551966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:22:10.555690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:22:10.561095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:22:12.564578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:22:12.571560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:22:14.575695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:22:14.580770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:22:16.584522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:22:16.591389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:22:18.594474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:22:18.601714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:22:20.605722       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:22:20.610320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:22:22.614069       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:22:22.621115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:22:24.624742       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:22:24.631355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:22:26.634987       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:22:26.639092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:22:28.642484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:22:28.648835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
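The repeated storage-provisioner warnings at the end of the log are most likely from Endpoints-based leader election: client-go emits the warning on every poll because, as the message says, v1 Endpoints is deprecated in v1.33+ in favor of discovery.k8s.io/v1 EndpointSlice. A quick way to inspect both objects from the same context (triage commands assumed here, not part of the test run):

    kubectl --context addons-377526 -n kube-system get endpoints
    kubectl --context addons-377526 -n kube-system get endpointslices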
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-377526 -n addons-377526
helpers_test.go:269: (dbg) Run:  kubectl --context addons-377526 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-stzpv ingress-nginx-admission-patch-tj9l7
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-377526 describe pod ingress-nginx-admission-create-stzpv ingress-nginx-admission-patch-tj9l7
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-377526 describe pod ingress-nginx-admission-create-stzpv ingress-nginx-admission-patch-tj9l7: exit status 1 (103.1677ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-stzpv" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-tj9l7" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-377526 describe pod ingress-nginx-admission-create-stzpv ingress-nginx-admission-patch-tj9l7: exit status 1
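The NotFound results here are expected rather than a second failure: ingress-nginx-admission-create and ingress-nginx-admission-patch are one-shot Job pods, so they can be cleaned up between the pod listing at helpers_test.go:269 and the describe at helpers_test.go:285. A hedged way to confirm from the same context (commands are an assumption, not part of the harness):

    kubectl --context addons-377526 -n ingress-nginx get jobs,pods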
addons_test.go:1113: (dbg) Run:  out/minikube-linux-arm64 -p addons-377526 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1113: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377526 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (320.995735ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1209 04:22:30.629374 1590841 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:22:30.630178 1590841 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:22:30.630193 1590841 out.go:374] Setting ErrFile to fd 2...
	I1209 04:22:30.630197 1590841 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:22:30.630620 1590841 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 04:22:30.631527 1590841 mustload.go:66] Loading cluster: addons-377526
	I1209 04:22:30.631965 1590841 config.go:182] Loaded profile config "addons-377526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:22:30.632009 1590841 addons.go:622] checking whether the cluster is paused
	I1209 04:22:30.632143 1590841 config.go:182] Loaded profile config "addons-377526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:22:30.632180 1590841 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:22:30.632751 1590841 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:22:30.654720 1590841 ssh_runner.go:195] Run: systemctl --version
	I1209 04:22:30.654780 1590841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:22:30.673393 1590841 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:22:30.781316 1590841 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 04:22:30.781411 1590841 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:22:30.835809 1590841 cri.go:89] found id: "7644b8e322bd8dcac5ed6a0c7a43e03e280db1d7a755252cb463ff25a127bd00"
	I1209 04:22:30.835833 1590841 cri.go:89] found id: "c04442e39ef35fdc720b3c2bb3a77da977256d816f2eec2ebcfa6b979f8d0968"
	I1209 04:22:30.835840 1590841 cri.go:89] found id: "7aebdd3431a655622c91099e2e13d404de79d2d92cd3744233ad482bd5950b4a"
	I1209 04:22:30.835845 1590841 cri.go:89] found id: "18febaede59c2967af53b607d5a0971f75da0dffdc720977888c74bc4b43f989"
	I1209 04:22:30.835848 1590841 cri.go:89] found id: "8cf0b6bd32f5bb3b5d0c99a5cb73fc3b6625311dbba876d4d3e383bbd52b8844"
	I1209 04:22:30.835852 1590841 cri.go:89] found id: "174130d7501d2a4338753b358cf8658f2791da0197e2ddee56f4682364d0e5ce"
	I1209 04:22:30.835855 1590841 cri.go:89] found id: "a69a96490b5aefb4b7039ba55efc49cccbd001d0e16126c16649afdae1e0e5be"
	I1209 04:22:30.835859 1590841 cri.go:89] found id: "cc37fac1bc08a55afea23e467cf7ab65d053708170c6c35c316845ac5ad895e5"
	I1209 04:22:30.835862 1590841 cri.go:89] found id: "9489ae99adda39fae4cb5dfa918abcbcec4c6b2882922f49b01c09790b02500b"
	I1209 04:22:30.835868 1590841 cri.go:89] found id: "197524f2c1763b0f2e842c6b573a4d1bfb3cf7dfa8bea6daacdeff861043d351"
	I1209 04:22:30.835872 1590841 cri.go:89] found id: "8d2bef8d891580f057b9dca614e75513beeac88caf7536355ac38b71a4929ee5"
	I1209 04:22:30.835875 1590841 cri.go:89] found id: "e89fcd7e7a65121ec84cd2c9d89bbf436ccc5090968a417d230a03fafb1d57cb"
	I1209 04:22:30.835878 1590841 cri.go:89] found id: "9b3a8c868c3c905e36617afaf33522db2b0959f5baf822b5b3bad893fa0da43a"
	I1209 04:22:30.835882 1590841 cri.go:89] found id: "d448cac096a040574fbee288ffbf1b79d931e05be65b8699003d18c35b213d99"
	I1209 04:22:30.835885 1590841 cri.go:89] found id: "a549d8652b346e26791e868967bc4ba6691a6f3e6d6890628c34d5aaabaee422"
	I1209 04:22:30.835893 1590841 cri.go:89] found id: "365b8c540ac8b4ba2ffbea68247ecdcb4b22e31ec4b497e44af8153b9232cba0"
	I1209 04:22:30.835896 1590841 cri.go:89] found id: "ade186251b0b03d5e21b3b509f2bf86293ef5ea617865111f2dd375f2cfaa2af"
	I1209 04:22:30.835901 1590841 cri.go:89] found id: "895a853e4aab3bfd20dc33efe93732055e9143ac6017c4be43840f854767cfac"
	I1209 04:22:30.835904 1590841 cri.go:89] found id: "3f583b93b3d82da13bf4c0cc7590397283a9f565f160c0b4aad9b625564dde0f"
	I1209 04:22:30.835908 1590841 cri.go:89] found id: "f23d383bb901021ad468c9e01555bb740a0facf5322dcee6b0def8a8f5c26cef"
	I1209 04:22:30.835914 1590841 cri.go:89] found id: "3e19f8eb0be8689c1e6db170c4a1893db77016e40e2d7ee36ae46433d1ab5dc7"
	I1209 04:22:30.835917 1590841 cri.go:89] found id: "5f20869a412bbccdd019d0d88792fb1e038ef017fb684b743afc406185107fab"
	I1209 04:22:30.835921 1590841 cri.go:89] found id: "23444ddd657bbd00eed4c8df42d61dc49f01325e6c8f6ca46b95e4e0ebfec769"
	I1209 04:22:30.835924 1590841 cri.go:89] found id: "3d9befd5158d0fb9dcd408b398d0ade47c7417da742e387aa66109ca8ed7918e"
	I1209 04:22:30.835928 1590841 cri.go:89] found id: ""
	I1209 04:22:30.835982 1590841 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 04:22:30.875841 1590841 out.go:203] 
	W1209 04:22:30.878833 1590841 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T04:22:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T04:22:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1209 04:22:30.878918 1590841 out.go:285] * 
	* 
	W1209 04:22:30.887221 1590841 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 04:22:30.890365 1590841 out.go:203] 

** /stderr **
addons_test.go:1115: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-377526 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
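This exit status 11 is minikube's paused-cluster guard tripping, not an addon problem: before disabling, it lists paused containers with `sudo runc list -f json`, and on this image that command exits 1 because /run/runc does not exist (plausibly because crio here is configured with a different OCI runtime root, e.g. crun's /run/crun; the runtime guess is an assumption, the missing directory is straight from the log). A minimal reproduction sketch from inside the node:

    minikube -p addons-377526 ssh
    # hypothetical check: see which runtime state directories actually exist
    sudo ls /run | grep -iE 'runc|crun'
    # reproduces the failure seen in the log above
    sudo runc list -f json
    # the same kube-system containers are still listable through the CRI
    sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system

The `addons disable ingress` attempt that follows hits the same code path and fails identically.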
addons_test.go:1113: (dbg) Run:  out/minikube-linux-arm64 -p addons-377526 addons disable ingress --alsologtostderr -v=1
addons_test.go:1113: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377526 addons disable ingress --alsologtostderr -v=1: exit status 11 (279.711593ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1209 04:22:30.951810 1590962 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:22:30.952490 1590962 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:22:30.952511 1590962 out.go:374] Setting ErrFile to fd 2...
	I1209 04:22:30.952519 1590962 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:22:30.952913 1590962 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 04:22:30.953650 1590962 mustload.go:66] Loading cluster: addons-377526
	I1209 04:22:30.954089 1590962 config.go:182] Loaded profile config "addons-377526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:22:30.954127 1590962 addons.go:622] checking whether the cluster is paused
	I1209 04:22:30.954272 1590962 config.go:182] Loaded profile config "addons-377526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:22:30.954303 1590962 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:22:30.954876 1590962 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:22:30.979395 1590962 ssh_runner.go:195] Run: systemctl --version
	I1209 04:22:30.979458 1590962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:22:30.998362 1590962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:22:31.113355 1590962 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 04:22:31.113448 1590962 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:22:31.143553 1590962 cri.go:89] found id: "7644b8e322bd8dcac5ed6a0c7a43e03e280db1d7a755252cb463ff25a127bd00"
	I1209 04:22:31.143626 1590962 cri.go:89] found id: "c04442e39ef35fdc720b3c2bb3a77da977256d816f2eec2ebcfa6b979f8d0968"
	I1209 04:22:31.143645 1590962 cri.go:89] found id: "7aebdd3431a655622c91099e2e13d404de79d2d92cd3744233ad482bd5950b4a"
	I1209 04:22:31.143665 1590962 cri.go:89] found id: "18febaede59c2967af53b607d5a0971f75da0dffdc720977888c74bc4b43f989"
	I1209 04:22:31.143702 1590962 cri.go:89] found id: "8cf0b6bd32f5bb3b5d0c99a5cb73fc3b6625311dbba876d4d3e383bbd52b8844"
	I1209 04:22:31.143724 1590962 cri.go:89] found id: "174130d7501d2a4338753b358cf8658f2791da0197e2ddee56f4682364d0e5ce"
	I1209 04:22:31.143746 1590962 cri.go:89] found id: "a69a96490b5aefb4b7039ba55efc49cccbd001d0e16126c16649afdae1e0e5be"
	I1209 04:22:31.143779 1590962 cri.go:89] found id: "cc37fac1bc08a55afea23e467cf7ab65d053708170c6c35c316845ac5ad895e5"
	I1209 04:22:31.143799 1590962 cri.go:89] found id: "9489ae99adda39fae4cb5dfa918abcbcec4c6b2882922f49b01c09790b02500b"
	I1209 04:22:31.143817 1590962 cri.go:89] found id: "197524f2c1763b0f2e842c6b573a4d1bfb3cf7dfa8bea6daacdeff861043d351"
	I1209 04:22:31.143836 1590962 cri.go:89] found id: "8d2bef8d891580f057b9dca614e75513beeac88caf7536355ac38b71a4929ee5"
	I1209 04:22:31.143867 1590962 cri.go:89] found id: "e89fcd7e7a65121ec84cd2c9d89bbf436ccc5090968a417d230a03fafb1d57cb"
	I1209 04:22:31.143889 1590962 cri.go:89] found id: "9b3a8c868c3c905e36617afaf33522db2b0959f5baf822b5b3bad893fa0da43a"
	I1209 04:22:31.143908 1590962 cri.go:89] found id: "d448cac096a040574fbee288ffbf1b79d931e05be65b8699003d18c35b213d99"
	I1209 04:22:31.143927 1590962 cri.go:89] found id: "a549d8652b346e26791e868967bc4ba6691a6f3e6d6890628c34d5aaabaee422"
	I1209 04:22:31.143957 1590962 cri.go:89] found id: "365b8c540ac8b4ba2ffbea68247ecdcb4b22e31ec4b497e44af8153b9232cba0"
	I1209 04:22:31.143989 1590962 cri.go:89] found id: "ade186251b0b03d5e21b3b509f2bf86293ef5ea617865111f2dd375f2cfaa2af"
	I1209 04:22:31.144010 1590962 cri.go:89] found id: "895a853e4aab3bfd20dc33efe93732055e9143ac6017c4be43840f854767cfac"
	I1209 04:22:31.144043 1590962 cri.go:89] found id: "3f583b93b3d82da13bf4c0cc7590397283a9f565f160c0b4aad9b625564dde0f"
	I1209 04:22:31.144065 1590962 cri.go:89] found id: "f23d383bb901021ad468c9e01555bb740a0facf5322dcee6b0def8a8f5c26cef"
	I1209 04:22:31.144089 1590962 cri.go:89] found id: "3e19f8eb0be8689c1e6db170c4a1893db77016e40e2d7ee36ae46433d1ab5dc7"
	I1209 04:22:31.144106 1590962 cri.go:89] found id: "5f20869a412bbccdd019d0d88792fb1e038ef017fb684b743afc406185107fab"
	I1209 04:22:31.144142 1590962 cri.go:89] found id: "23444ddd657bbd00eed4c8df42d61dc49f01325e6c8f6ca46b95e4e0ebfec769"
	I1209 04:22:31.144160 1590962 cri.go:89] found id: "3d9befd5158d0fb9dcd408b398d0ade47c7417da742e387aa66109ca8ed7918e"
	I1209 04:22:31.144178 1590962 cri.go:89] found id: ""
	I1209 04:22:31.144260 1590962 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 04:22:31.159720 1590962 out.go:203] 
	W1209 04:22:31.162745 1590962 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T04:22:31Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T04:22:31Z" level=error msg="open /run/runc: no such file or directory"
	
	W1209 04:22:31.162778 1590962 out.go:285] * 
	* 
	W1209 04:22:31.170721 1590962 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 04:22:31.173835 1590962 out.go:203] 
** /stderr **
addons_test.go:1115: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-377526 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (142.60s)
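Every addons-disable failure in this run exits the same way: before disabling, minikube checks whether the cluster is paused, and that probe shells out to "sudo runc list -f json", which fails here because /run/runc does not exist on the crio node. A minimal manual-triage sketch, assuming ssh access via the same binary (these commands are debugging suggestions inferred from the stderr above, not part of the test run):

# Hedged sketch: reproduce the failing paused-state probe by hand.
out/minikube-linux-arm64 -p addons-377526 ssh -- "sudo runc list -f json"   # expected to fail: open /run/runc: no such file or directory
out/minikube-linux-arm64 -p addons-377526 ssh -- "ls -ld /run/runc"         # confirm the runc state directory is missing
out/minikube-linux-arm64 -p addons-377526 ssh -- \
  "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"   # the crictl half of the probe, which succeeds in the log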
TestAddons/parallel/InspektorGadget (6.27s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:883: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-s4rfv" [4c09ff3a-bc44-4687-961a-692d1187530b] Running
addons_test.go:883: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00436919s
addons_test.go:1113: (dbg) Run:  out/minikube-linux-arm64 -p addons-377526 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1113: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377526 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (267.402207ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1209 04:20:33.258489 1589639 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:20:33.259308 1589639 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:20:33.259326 1589639 out.go:374] Setting ErrFile to fd 2...
	I1209 04:20:33.259333 1589639 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:20:33.259714 1589639 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 04:20:33.260076 1589639 mustload.go:66] Loading cluster: addons-377526
	I1209 04:20:33.260503 1589639 config.go:182] Loaded profile config "addons-377526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:20:33.260525 1589639 addons.go:622] checking whether the cluster is paused
	I1209 04:20:33.260691 1589639 config.go:182] Loaded profile config "addons-377526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:20:33.260728 1589639 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:20:33.261343 1589639 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:20:33.279614 1589639 ssh_runner.go:195] Run: systemctl --version
	I1209 04:20:33.279668 1589639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:20:33.298754 1589639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:20:33.405158 1589639 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 04:20:33.405246 1589639 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:20:33.437210 1589639 cri.go:89] found id: "c04442e39ef35fdc720b3c2bb3a77da977256d816f2eec2ebcfa6b979f8d0968"
	I1209 04:20:33.437280 1589639 cri.go:89] found id: "7aebdd3431a655622c91099e2e13d404de79d2d92cd3744233ad482bd5950b4a"
	I1209 04:20:33.437300 1589639 cri.go:89] found id: "18febaede59c2967af53b607d5a0971f75da0dffdc720977888c74bc4b43f989"
	I1209 04:20:33.437320 1589639 cri.go:89] found id: "8cf0b6bd32f5bb3b5d0c99a5cb73fc3b6625311dbba876d4d3e383bbd52b8844"
	I1209 04:20:33.437340 1589639 cri.go:89] found id: "174130d7501d2a4338753b358cf8658f2791da0197e2ddee56f4682364d0e5ce"
	I1209 04:20:33.437376 1589639 cri.go:89] found id: "a69a96490b5aefb4b7039ba55efc49cccbd001d0e16126c16649afdae1e0e5be"
	I1209 04:20:33.437395 1589639 cri.go:89] found id: "cc37fac1bc08a55afea23e467cf7ab65d053708170c6c35c316845ac5ad895e5"
	I1209 04:20:33.437414 1589639 cri.go:89] found id: "9489ae99adda39fae4cb5dfa918abcbcec4c6b2882922f49b01c09790b02500b"
	I1209 04:20:33.437432 1589639 cri.go:89] found id: "197524f2c1763b0f2e842c6b573a4d1bfb3cf7dfa8bea6daacdeff861043d351"
	I1209 04:20:33.437467 1589639 cri.go:89] found id: "8d2bef8d891580f057b9dca614e75513beeac88caf7536355ac38b71a4929ee5"
	I1209 04:20:33.437494 1589639 cri.go:89] found id: "e89fcd7e7a65121ec84cd2c9d89bbf436ccc5090968a417d230a03fafb1d57cb"
	I1209 04:20:33.437515 1589639 cri.go:89] found id: "9b3a8c868c3c905e36617afaf33522db2b0959f5baf822b5b3bad893fa0da43a"
	I1209 04:20:33.437546 1589639 cri.go:89] found id: "d448cac096a040574fbee288ffbf1b79d931e05be65b8699003d18c35b213d99"
	I1209 04:20:33.437565 1589639 cri.go:89] found id: "a549d8652b346e26791e868967bc4ba6691a6f3e6d6890628c34d5aaabaee422"
	I1209 04:20:33.437592 1589639 cri.go:89] found id: "365b8c540ac8b4ba2ffbea68247ecdcb4b22e31ec4b497e44af8153b9232cba0"
	I1209 04:20:33.437629 1589639 cri.go:89] found id: "ade186251b0b03d5e21b3b509f2bf86293ef5ea617865111f2dd375f2cfaa2af"
	I1209 04:20:33.437656 1589639 cri.go:89] found id: "895a853e4aab3bfd20dc33efe93732055e9143ac6017c4be43840f854767cfac"
	I1209 04:20:33.437678 1589639 cri.go:89] found id: "3f583b93b3d82da13bf4c0cc7590397283a9f565f160c0b4aad9b625564dde0f"
	I1209 04:20:33.437709 1589639 cri.go:89] found id: "f23d383bb901021ad468c9e01555bb740a0facf5322dcee6b0def8a8f5c26cef"
	I1209 04:20:33.437728 1589639 cri.go:89] found id: "3e19f8eb0be8689c1e6db170c4a1893db77016e40e2d7ee36ae46433d1ab5dc7"
	I1209 04:20:33.437751 1589639 cri.go:89] found id: "5f20869a412bbccdd019d0d88792fb1e038ef017fb684b743afc406185107fab"
	I1209 04:20:33.437786 1589639 cri.go:89] found id: "23444ddd657bbd00eed4c8df42d61dc49f01325e6c8f6ca46b95e4e0ebfec769"
	I1209 04:20:33.437804 1589639 cri.go:89] found id: "3d9befd5158d0fb9dcd408b398d0ade47c7417da742e387aa66109ca8ed7918e"
	I1209 04:20:33.437823 1589639 cri.go:89] found id: ""
	I1209 04:20:33.437908 1589639 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 04:20:33.454007 1589639 out.go:203] 
	W1209 04:20:33.456932 1589639 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T04:20:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T04:20:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1209 04:20:33.456957 1589639 out.go:285] * 
	* 
	W1209 04:20:33.465002 1589639 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 04:20:33.467900 1589639 out.go:203] 
** /stderr **
addons_test.go:1115: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-377526 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.27s)
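The gadget pods themselves became healthy within 6s; only the disable step failed. The readiness wait at addons_test.go:883 polls for pods matching the label, roughly equivalent to the following manual check (an assumed equivalent, not the test's actual code):

# Hedged sketch of the readiness wait the harness performs.
kubectl --context addons-377526 -n gadget wait pod -l k8s-app=gadget --for=condition=Ready --timeout=8m
kubectl --context addons-377526 -n gadget get pods -l k8s-app=gadget -o wide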
TestAddons/parallel/MetricsServer (5.47s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:515: metrics-server stabilized in 3.951112ms
addons_test.go:517: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-pckkq" [5cf1cd5f-cc2e-4169-947b-41b6e4c45a46] Running
addons_test.go:517: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003348724s
addons_test.go:523: (dbg) Run:  kubectl --context addons-377526 top pods -n kube-system
addons_test.go:1113: (dbg) Run:  out/minikube-linux-arm64 -p addons-377526 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1113: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377526 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (345.685388ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1209 04:20:08.307750 1588937 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:20:08.308734 1588937 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:20:08.308769 1588937 out.go:374] Setting ErrFile to fd 2...
	I1209 04:20:08.308789 1588937 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:20:08.309128 1588937 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 04:20:08.309797 1588937 mustload.go:66] Loading cluster: addons-377526
	I1209 04:20:08.310440 1588937 config.go:182] Loaded profile config "addons-377526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:20:08.310482 1588937 addons.go:622] checking whether the cluster is paused
	I1209 04:20:08.310729 1588937 config.go:182] Loaded profile config "addons-377526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:20:08.310770 1588937 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:20:08.311375 1588937 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:20:08.346620 1588937 ssh_runner.go:195] Run: systemctl --version
	I1209 04:20:08.346675 1588937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:20:08.375175 1588937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:20:08.494110 1588937 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 04:20:08.494207 1588937 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:20:08.530235 1588937 cri.go:89] found id: "c04442e39ef35fdc720b3c2bb3a77da977256d816f2eec2ebcfa6b979f8d0968"
	I1209 04:20:08.530254 1588937 cri.go:89] found id: "7aebdd3431a655622c91099e2e13d404de79d2d92cd3744233ad482bd5950b4a"
	I1209 04:20:08.530260 1588937 cri.go:89] found id: "18febaede59c2967af53b607d5a0971f75da0dffdc720977888c74bc4b43f989"
	I1209 04:20:08.530263 1588937 cri.go:89] found id: "8cf0b6bd32f5bb3b5d0c99a5cb73fc3b6625311dbba876d4d3e383bbd52b8844"
	I1209 04:20:08.530267 1588937 cri.go:89] found id: "174130d7501d2a4338753b358cf8658f2791da0197e2ddee56f4682364d0e5ce"
	I1209 04:20:08.530271 1588937 cri.go:89] found id: "a69a96490b5aefb4b7039ba55efc49cccbd001d0e16126c16649afdae1e0e5be"
	I1209 04:20:08.530274 1588937 cri.go:89] found id: "cc37fac1bc08a55afea23e467cf7ab65d053708170c6c35c316845ac5ad895e5"
	I1209 04:20:08.530277 1588937 cri.go:89] found id: "9489ae99adda39fae4cb5dfa918abcbcec4c6b2882922f49b01c09790b02500b"
	I1209 04:20:08.530281 1588937 cri.go:89] found id: "197524f2c1763b0f2e842c6b573a4d1bfb3cf7dfa8bea6daacdeff861043d351"
	I1209 04:20:08.530288 1588937 cri.go:89] found id: "8d2bef8d891580f057b9dca614e75513beeac88caf7536355ac38b71a4929ee5"
	I1209 04:20:08.530292 1588937 cri.go:89] found id: "e89fcd7e7a65121ec84cd2c9d89bbf436ccc5090968a417d230a03fafb1d57cb"
	I1209 04:20:08.530295 1588937 cri.go:89] found id: "9b3a8c868c3c905e36617afaf33522db2b0959f5baf822b5b3bad893fa0da43a"
	I1209 04:20:08.530300 1588937 cri.go:89] found id: "d448cac096a040574fbee288ffbf1b79d931e05be65b8699003d18c35b213d99"
	I1209 04:20:08.530303 1588937 cri.go:89] found id: "a549d8652b346e26791e868967bc4ba6691a6f3e6d6890628c34d5aaabaee422"
	I1209 04:20:08.530306 1588937 cri.go:89] found id: "365b8c540ac8b4ba2ffbea68247ecdcb4b22e31ec4b497e44af8153b9232cba0"
	I1209 04:20:08.530317 1588937 cri.go:89] found id: "ade186251b0b03d5e21b3b509f2bf86293ef5ea617865111f2dd375f2cfaa2af"
	I1209 04:20:08.530321 1588937 cri.go:89] found id: "895a853e4aab3bfd20dc33efe93732055e9143ac6017c4be43840f854767cfac"
	I1209 04:20:08.530326 1588937 cri.go:89] found id: "3f583b93b3d82da13bf4c0cc7590397283a9f565f160c0b4aad9b625564dde0f"
	I1209 04:20:08.530329 1588937 cri.go:89] found id: "f23d383bb901021ad468c9e01555bb740a0facf5322dcee6b0def8a8f5c26cef"
	I1209 04:20:08.530332 1588937 cri.go:89] found id: "3e19f8eb0be8689c1e6db170c4a1893db77016e40e2d7ee36ae46433d1ab5dc7"
	I1209 04:20:08.530337 1588937 cri.go:89] found id: "5f20869a412bbccdd019d0d88792fb1e038ef017fb684b743afc406185107fab"
	I1209 04:20:08.530340 1588937 cri.go:89] found id: "23444ddd657bbd00eed4c8df42d61dc49f01325e6c8f6ca46b95e4e0ebfec769"
	I1209 04:20:08.530343 1588937 cri.go:89] found id: "3d9befd5158d0fb9dcd408b398d0ade47c7417da742e387aa66109ca8ed7918e"
	I1209 04:20:08.530351 1588937 cri.go:89] found id: ""
	I1209 04:20:08.530403 1588937 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 04:20:08.552430 1588937 out.go:203] 
	W1209 04:20:08.558429 1588937 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T04:20:08Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T04:20:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1209 04:20:08.558533 1588937 out.go:285] * 
	* 
	W1209 04:20:08.566491 1588937 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 04:20:08.571223 1588937 out.go:203] 
** /stderr **
addons_test.go:1115: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-377526 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.47s)
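As with the other addon tests, metrics-server was serving (the kubectl top call at addons_test.go:523 succeeded) and only the disable step hit the runc probe. A hedged sketch for confirming metrics-server health independently of the addon machinery (the APIService name is the one metrics-server conventionally registers, an assumption not shown in this log):

# Hedged sketch: verify metrics-server outside the addon disable path.
kubectl --context addons-377526 get apiservice v1beta1.metrics.k8s.io   # should report Available=True
kubectl --context addons-377526 top nodes
kubectl --context addons-377526 top pods -n kube-system                 # same call the test makes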
TestAddons/parallel/CSI (22.99s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I1209 04:20:04.216235 1580521 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1209 04:20:04.220024 1580521 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1209 04:20:04.220057 1580521 kapi.go:107] duration metric: took 8.409798ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:609: csi-hostpath-driver pods stabilized in 8.420374ms
addons_test.go:612: (dbg) Run:  kubectl --context addons-377526 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377526 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377526 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377526 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-377526 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [40f2e51f-8e69-466e-bcbe-64f53db447b6] Pending
helpers_test.go:352: "task-pv-pod" [40f2e51f-8e69-466e-bcbe-64f53db447b6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [40f2e51f-8e69-466e-bcbe-64f53db447b6] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.002954831s
addons_test.go:632: (dbg) Run:  kubectl --context addons-377526 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:637: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-377526 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-377526 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:642: (dbg) Run:  kubectl --context addons-377526 delete pod task-pv-pod
addons_test.go:648: (dbg) Run:  kubectl --context addons-377526 delete pvc hpvc
addons_test.go:654: (dbg) Run:  kubectl --context addons-377526 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:659: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377526 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377526 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:664: (dbg) Run:  kubectl --context addons-377526 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:669: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [8054aa32-2021-4930-9d9c-f5fa9d4e7481] Pending
helpers_test.go:352: "task-pv-pod-restore" [8054aa32-2021-4930-9d9c-f5fa9d4e7481] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [8054aa32-2021-4930-9d9c-f5fa9d4e7481] Running
addons_test.go:669: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003956513s
addons_test.go:674: (dbg) Run:  kubectl --context addons-377526 delete pod task-pv-pod-restore
addons_test.go:678: (dbg) Run:  kubectl --context addons-377526 delete pvc hpvc-restore
addons_test.go:682: (dbg) Run:  kubectl --context addons-377526 delete volumesnapshot new-snapshot-demo
addons_test.go:1113: (dbg) Run:  out/minikube-linux-arm64 -p addons-377526 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1113: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377526 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (272.952399ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1209 04:20:26.699049 1589534 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:20:26.699940 1589534 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:20:26.699958 1589534 out.go:374] Setting ErrFile to fd 2...
	I1209 04:20:26.699965 1589534 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:20:26.700302 1589534 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 04:20:26.700756 1589534 mustload.go:66] Loading cluster: addons-377526
	I1209 04:20:26.701218 1589534 config.go:182] Loaded profile config "addons-377526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:20:26.701240 1589534 addons.go:622] checking whether the cluster is paused
	I1209 04:20:26.701391 1589534 config.go:182] Loaded profile config "addons-377526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:20:26.701411 1589534 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:20:26.701997 1589534 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:20:26.720809 1589534 ssh_runner.go:195] Run: systemctl --version
	I1209 04:20:26.720878 1589534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:20:26.738363 1589534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:20:26.845529 1589534 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 04:20:26.845621 1589534 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:20:26.881838 1589534 cri.go:89] found id: "c04442e39ef35fdc720b3c2bb3a77da977256d816f2eec2ebcfa6b979f8d0968"
	I1209 04:20:26.881862 1589534 cri.go:89] found id: "7aebdd3431a655622c91099e2e13d404de79d2d92cd3744233ad482bd5950b4a"
	I1209 04:20:26.881872 1589534 cri.go:89] found id: "18febaede59c2967af53b607d5a0971f75da0dffdc720977888c74bc4b43f989"
	I1209 04:20:26.881876 1589534 cri.go:89] found id: "8cf0b6bd32f5bb3b5d0c99a5cb73fc3b6625311dbba876d4d3e383bbd52b8844"
	I1209 04:20:26.881880 1589534 cri.go:89] found id: "174130d7501d2a4338753b358cf8658f2791da0197e2ddee56f4682364d0e5ce"
	I1209 04:20:26.881883 1589534 cri.go:89] found id: "a69a96490b5aefb4b7039ba55efc49cccbd001d0e16126c16649afdae1e0e5be"
	I1209 04:20:26.881886 1589534 cri.go:89] found id: "cc37fac1bc08a55afea23e467cf7ab65d053708170c6c35c316845ac5ad895e5"
	I1209 04:20:26.881889 1589534 cri.go:89] found id: "9489ae99adda39fae4cb5dfa918abcbcec4c6b2882922f49b01c09790b02500b"
	I1209 04:20:26.881892 1589534 cri.go:89] found id: "197524f2c1763b0f2e842c6b573a4d1bfb3cf7dfa8bea6daacdeff861043d351"
	I1209 04:20:26.881899 1589534 cri.go:89] found id: "8d2bef8d891580f057b9dca614e75513beeac88caf7536355ac38b71a4929ee5"
	I1209 04:20:26.881902 1589534 cri.go:89] found id: "e89fcd7e7a65121ec84cd2c9d89bbf436ccc5090968a417d230a03fafb1d57cb"
	I1209 04:20:26.881905 1589534 cri.go:89] found id: "9b3a8c868c3c905e36617afaf33522db2b0959f5baf822b5b3bad893fa0da43a"
	I1209 04:20:26.881908 1589534 cri.go:89] found id: "d448cac096a040574fbee288ffbf1b79d931e05be65b8699003d18c35b213d99"
	I1209 04:20:26.881912 1589534 cri.go:89] found id: "a549d8652b346e26791e868967bc4ba6691a6f3e6d6890628c34d5aaabaee422"
	I1209 04:20:26.881915 1589534 cri.go:89] found id: "365b8c540ac8b4ba2ffbea68247ecdcb4b22e31ec4b497e44af8153b9232cba0"
	I1209 04:20:26.881921 1589534 cri.go:89] found id: "ade186251b0b03d5e21b3b509f2bf86293ef5ea617865111f2dd375f2cfaa2af"
	I1209 04:20:26.881927 1589534 cri.go:89] found id: "895a853e4aab3bfd20dc33efe93732055e9143ac6017c4be43840f854767cfac"
	I1209 04:20:26.881931 1589534 cri.go:89] found id: "3f583b93b3d82da13bf4c0cc7590397283a9f565f160c0b4aad9b625564dde0f"
	I1209 04:20:26.881934 1589534 cri.go:89] found id: "f23d383bb901021ad468c9e01555bb740a0facf5322dcee6b0def8a8f5c26cef"
	I1209 04:20:26.881937 1589534 cri.go:89] found id: "3e19f8eb0be8689c1e6db170c4a1893db77016e40e2d7ee36ae46433d1ab5dc7"
	I1209 04:20:26.881942 1589534 cri.go:89] found id: "5f20869a412bbccdd019d0d88792fb1e038ef017fb684b743afc406185107fab"
	I1209 04:20:26.881945 1589534 cri.go:89] found id: "23444ddd657bbd00eed4c8df42d61dc49f01325e6c8f6ca46b95e4e0ebfec769"
	I1209 04:20:26.881948 1589534 cri.go:89] found id: "3d9befd5158d0fb9dcd408b398d0ade47c7417da742e387aa66109ca8ed7918e"
	I1209 04:20:26.881951 1589534 cri.go:89] found id: ""
	I1209 04:20:26.882003 1589534 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 04:20:26.897894 1589534 out.go:203] 
	W1209 04:20:26.900783 1589534 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T04:20:26Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T04:20:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1209 04:20:26.900810 1589534 out.go:285] * 
	* 
	W1209 04:20:26.909117 1589534 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 04:20:26.912294 1589534 out.go:203] 
** /stderr **
addons_test.go:1115: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-377526 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1113: (dbg) Run:  out/minikube-linux-arm64 -p addons-377526 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1113: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377526 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (278.910432ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1209 04:20:26.965838 1589578 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:20:26.966851 1589578 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:20:26.966907 1589578 out.go:374] Setting ErrFile to fd 2...
	I1209 04:20:26.966947 1589578 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:20:26.967251 1589578 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 04:20:26.967700 1589578 mustload.go:66] Loading cluster: addons-377526
	I1209 04:20:26.968167 1589578 config.go:182] Loaded profile config "addons-377526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:20:26.968216 1589578 addons.go:622] checking whether the cluster is paused
	I1209 04:20:26.968359 1589578 config.go:182] Loaded profile config "addons-377526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:20:26.968395 1589578 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:20:26.968975 1589578 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:20:26.989761 1589578 ssh_runner.go:195] Run: systemctl --version
	I1209 04:20:26.989815 1589578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:20:27.018059 1589578 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:20:27.125614 1589578 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 04:20:27.125702 1589578 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:20:27.155832 1589578 cri.go:89] found id: "c04442e39ef35fdc720b3c2bb3a77da977256d816f2eec2ebcfa6b979f8d0968"
	I1209 04:20:27.155855 1589578 cri.go:89] found id: "7aebdd3431a655622c91099e2e13d404de79d2d92cd3744233ad482bd5950b4a"
	I1209 04:20:27.155860 1589578 cri.go:89] found id: "18febaede59c2967af53b607d5a0971f75da0dffdc720977888c74bc4b43f989"
	I1209 04:20:27.155864 1589578 cri.go:89] found id: "8cf0b6bd32f5bb3b5d0c99a5cb73fc3b6625311dbba876d4d3e383bbd52b8844"
	I1209 04:20:27.155867 1589578 cri.go:89] found id: "174130d7501d2a4338753b358cf8658f2791da0197e2ddee56f4682364d0e5ce"
	I1209 04:20:27.155871 1589578 cri.go:89] found id: "a69a96490b5aefb4b7039ba55efc49cccbd001d0e16126c16649afdae1e0e5be"
	I1209 04:20:27.155874 1589578 cri.go:89] found id: "cc37fac1bc08a55afea23e467cf7ab65d053708170c6c35c316845ac5ad895e5"
	I1209 04:20:27.155878 1589578 cri.go:89] found id: "9489ae99adda39fae4cb5dfa918abcbcec4c6b2882922f49b01c09790b02500b"
	I1209 04:20:27.155881 1589578 cri.go:89] found id: "197524f2c1763b0f2e842c6b573a4d1bfb3cf7dfa8bea6daacdeff861043d351"
	I1209 04:20:27.155887 1589578 cri.go:89] found id: "8d2bef8d891580f057b9dca614e75513beeac88caf7536355ac38b71a4929ee5"
	I1209 04:20:27.155890 1589578 cri.go:89] found id: "e89fcd7e7a65121ec84cd2c9d89bbf436ccc5090968a417d230a03fafb1d57cb"
	I1209 04:20:27.155893 1589578 cri.go:89] found id: "9b3a8c868c3c905e36617afaf33522db2b0959f5baf822b5b3bad893fa0da43a"
	I1209 04:20:27.155896 1589578 cri.go:89] found id: "d448cac096a040574fbee288ffbf1b79d931e05be65b8699003d18c35b213d99"
	I1209 04:20:27.155899 1589578 cri.go:89] found id: "a549d8652b346e26791e868967bc4ba6691a6f3e6d6890628c34d5aaabaee422"
	I1209 04:20:27.155902 1589578 cri.go:89] found id: "365b8c540ac8b4ba2ffbea68247ecdcb4b22e31ec4b497e44af8153b9232cba0"
	I1209 04:20:27.155908 1589578 cri.go:89] found id: "ade186251b0b03d5e21b3b509f2bf86293ef5ea617865111f2dd375f2cfaa2af"
	I1209 04:20:27.155911 1589578 cri.go:89] found id: "895a853e4aab3bfd20dc33efe93732055e9143ac6017c4be43840f854767cfac"
	I1209 04:20:27.155915 1589578 cri.go:89] found id: "3f583b93b3d82da13bf4c0cc7590397283a9f565f160c0b4aad9b625564dde0f"
	I1209 04:20:27.155918 1589578 cri.go:89] found id: "f23d383bb901021ad468c9e01555bb740a0facf5322dcee6b0def8a8f5c26cef"
	I1209 04:20:27.155921 1589578 cri.go:89] found id: "3e19f8eb0be8689c1e6db170c4a1893db77016e40e2d7ee36ae46433d1ab5dc7"
	I1209 04:20:27.155939 1589578 cri.go:89] found id: "5f20869a412bbccdd019d0d88792fb1e038ef017fb684b743afc406185107fab"
	I1209 04:20:27.155944 1589578 cri.go:89] found id: "23444ddd657bbd00eed4c8df42d61dc49f01325e6c8f6ca46b95e4e0ebfec769"
	I1209 04:20:27.155947 1589578 cri.go:89] found id: "3d9befd5158d0fb9dcd408b398d0ade47c7417da742e387aa66109ca8ed7918e"
	I1209 04:20:27.155950 1589578 cri.go:89] found id: ""
	I1209 04:20:27.156001 1589578 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 04:20:27.179263 1589578 out.go:203] 
	W1209 04:20:27.182192 1589578 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T04:20:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T04:20:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1209 04:20:27.182219 1589578 out.go:285] * 
	* 
	W1209 04:20:27.190126 1589578 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 04:20:27.193283 1589578 out.go:203] 
** /stderr **
addons_test.go:1115: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-377526 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (22.99s)
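The CSI flow itself (PVC, pod, snapshot, restore) passed end to end; the failure is again confined to the two disable calls. A condensed sketch of the same verification steps the test runs above, useful for manual replay (resource names come from the testdata manifests referenced in the log):

# Hedged condensation of the snapshot/restore checks performed above.
kubectl --context addons-377526 get pvc hpvc -o jsonpath='{.status.phase}'
kubectl --context addons-377526 get volumesnapshot new-snapshot-demo -o jsonpath='{.status.readyToUse}'
kubectl --context addons-377526 get pvc hpvc-restore -o jsonpath='{.status.phase}'
kubectl --context addons-377526 get pods -l app=task-pv-pod-restore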
TestAddons/parallel/Headlamp (3.38s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:868: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-377526 --alsologtostderr -v=1
addons_test.go:868: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-377526 --alsologtostderr -v=1: exit status 11 (298.908589ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1209 04:19:40.952556 1587606 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:19:40.953440 1587606 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:19:40.953456 1587606 out.go:374] Setting ErrFile to fd 2...
	I1209 04:19:40.953463 1587606 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:19:40.953744 1587606 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 04:19:40.954058 1587606 mustload.go:66] Loading cluster: addons-377526
	I1209 04:19:40.954443 1587606 config.go:182] Loaded profile config "addons-377526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:19:40.954459 1587606 addons.go:622] checking whether the cluster is paused
	I1209 04:19:40.954623 1587606 config.go:182] Loaded profile config "addons-377526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:19:40.954638 1587606 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:19:40.955172 1587606 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:19:40.973747 1587606 ssh_runner.go:195] Run: systemctl --version
	I1209 04:19:40.973803 1587606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:19:40.997063 1587606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:19:41.104987 1587606 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 04:19:41.105087 1587606 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:19:41.140658 1587606 cri.go:89] found id: "c04442e39ef35fdc720b3c2bb3a77da977256d816f2eec2ebcfa6b979f8d0968"
	I1209 04:19:41.140678 1587606 cri.go:89] found id: "7aebdd3431a655622c91099e2e13d404de79d2d92cd3744233ad482bd5950b4a"
	I1209 04:19:41.140683 1587606 cri.go:89] found id: "18febaede59c2967af53b607d5a0971f75da0dffdc720977888c74bc4b43f989"
	I1209 04:19:41.140687 1587606 cri.go:89] found id: "8cf0b6bd32f5bb3b5d0c99a5cb73fc3b6625311dbba876d4d3e383bbd52b8844"
	I1209 04:19:41.140691 1587606 cri.go:89] found id: "174130d7501d2a4338753b358cf8658f2791da0197e2ddee56f4682364d0e5ce"
	I1209 04:19:41.140694 1587606 cri.go:89] found id: "a69a96490b5aefb4b7039ba55efc49cccbd001d0e16126c16649afdae1e0e5be"
	I1209 04:19:41.140697 1587606 cri.go:89] found id: "cc37fac1bc08a55afea23e467cf7ab65d053708170c6c35c316845ac5ad895e5"
	I1209 04:19:41.140700 1587606 cri.go:89] found id: "9489ae99adda39fae4cb5dfa918abcbcec4c6b2882922f49b01c09790b02500b"
	I1209 04:19:41.140703 1587606 cri.go:89] found id: "197524f2c1763b0f2e842c6b573a4d1bfb3cf7dfa8bea6daacdeff861043d351"
	I1209 04:19:41.140711 1587606 cri.go:89] found id: "8d2bef8d891580f057b9dca614e75513beeac88caf7536355ac38b71a4929ee5"
	I1209 04:19:41.140714 1587606 cri.go:89] found id: "e89fcd7e7a65121ec84cd2c9d89bbf436ccc5090968a417d230a03fafb1d57cb"
	I1209 04:19:41.140717 1587606 cri.go:89] found id: "9b3a8c868c3c905e36617afaf33522db2b0959f5baf822b5b3bad893fa0da43a"
	I1209 04:19:41.140720 1587606 cri.go:89] found id: "d448cac096a040574fbee288ffbf1b79d931e05be65b8699003d18c35b213d99"
	I1209 04:19:41.140723 1587606 cri.go:89] found id: "a549d8652b346e26791e868967bc4ba6691a6f3e6d6890628c34d5aaabaee422"
	I1209 04:19:41.140726 1587606 cri.go:89] found id: "365b8c540ac8b4ba2ffbea68247ecdcb4b22e31ec4b497e44af8153b9232cba0"
	I1209 04:19:41.140731 1587606 cri.go:89] found id: "ade186251b0b03d5e21b3b509f2bf86293ef5ea617865111f2dd375f2cfaa2af"
	I1209 04:19:41.140734 1587606 cri.go:89] found id: "895a853e4aab3bfd20dc33efe93732055e9143ac6017c4be43840f854767cfac"
	I1209 04:19:41.140738 1587606 cri.go:89] found id: "3f583b93b3d82da13bf4c0cc7590397283a9f565f160c0b4aad9b625564dde0f"
	I1209 04:19:41.140741 1587606 cri.go:89] found id: "f23d383bb901021ad468c9e01555bb740a0facf5322dcee6b0def8a8f5c26cef"
	I1209 04:19:41.140744 1587606 cri.go:89] found id: "3e19f8eb0be8689c1e6db170c4a1893db77016e40e2d7ee36ae46433d1ab5dc7"
	I1209 04:19:41.140749 1587606 cri.go:89] found id: "5f20869a412bbccdd019d0d88792fb1e038ef017fb684b743afc406185107fab"
	I1209 04:19:41.140752 1587606 cri.go:89] found id: "23444ddd657bbd00eed4c8df42d61dc49f01325e6c8f6ca46b95e4e0ebfec769"
	I1209 04:19:41.140755 1587606 cri.go:89] found id: "3d9befd5158d0fb9dcd408b398d0ade47c7417da742e387aa66109ca8ed7918e"
	I1209 04:19:41.140758 1587606 cri.go:89] found id: ""
	I1209 04:19:41.140805 1587606 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 04:19:41.160531 1587606 out.go:203] 
	W1209 04:19:41.163681 1587606 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T04:19:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T04:19:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1209 04:19:41.163702 1587606 out.go:285] * 
	* 
	W1209 04:19:41.171534 1587606 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 04:19:41.174620 1587606 out.go:203] 
** /stderr **
addons_test.go:870: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-377526 --alsologtostderr -v=1": exit status 11
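Here the error is MK_ADDON_ENABLE_PAUSED rather than DISABLE, but the stderr shows the identical runc probe failure, so the cluster is likely not actually paused. A hedged sketch for checking the real pause state when the probe is unreliable (manual triage commands, not part of the test):

# Hedged sketch: check actual pause state independently of the runc probe.
out/minikube-linux-arm64 -p addons-377526 status                                        # host/kubelet/apiserver state
docker container inspect addons-377526 --format '{{.State.Status}} {{.State.Paused}}'   # same inspect the CLI runs above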
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-377526
helpers_test.go:243: (dbg) docker inspect addons-377526:
-- stdout --
	[
	    {
	        "Id": "296d96ed056115803df5e9b6e1f695022ae85b36790b8d9d91c58e0053c079c9",
	        "Created": "2025-12-09T04:17:16.302063351Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1581901,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T04:17:16.363034845Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/296d96ed056115803df5e9b6e1f695022ae85b36790b8d9d91c58e0053c079c9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/296d96ed056115803df5e9b6e1f695022ae85b36790b8d9d91c58e0053c079c9/hostname",
	        "HostsPath": "/var/lib/docker/containers/296d96ed056115803df5e9b6e1f695022ae85b36790b8d9d91c58e0053c079c9/hosts",
	        "LogPath": "/var/lib/docker/containers/296d96ed056115803df5e9b6e1f695022ae85b36790b8d9d91c58e0053c079c9/296d96ed056115803df5e9b6e1f695022ae85b36790b8d9d91c58e0053c079c9-json.log",
	        "Name": "/addons-377526",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-377526:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-377526",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "296d96ed056115803df5e9b6e1f695022ae85b36790b8d9d91c58e0053c079c9",
	                "LowerDir": "/var/lib/docker/overlay2/0b9b90a408cecb1c1f1540c33ed7bd30543618811d9d78bf1cf983117fbb15c4-init/diff:/var/lib/docker/overlay2/cb3f2b8eaaa8875b2899fccd39c4eec1759909855a0b804bc10246bdeabb16ed/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0b9b90a408cecb1c1f1540c33ed7bd30543618811d9d78bf1cf983117fbb15c4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0b9b90a408cecb1c1f1540c33ed7bd30543618811d9d78bf1cf983117fbb15c4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0b9b90a408cecb1c1f1540c33ed7bd30543618811d9d78bf1cf983117fbb15c4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-377526",
	                "Source": "/var/lib/docker/volumes/addons-377526/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-377526",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-377526",
	                "name.minikube.sigs.k8s.io": "addons-377526",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e0eaeaed21825edaf1bad522f7a17c86d3db0cf1e084b8a616bbc6ae11d083e3",
	            "SandboxKey": "/var/run/docker/netns/e0eaeaed2182",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34240"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34241"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34244"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34242"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34243"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-377526": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:00:0f:b0:c4:e3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "542d04a282446d7d8563cb215ec8412fd0d13d00239eba6fd964d03646557a2d",
	                    "EndpointID": "7dd7e0b9fc86928ee481271349fa43f8523811d8ec609d6a5f0bc20f0aa26422",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-377526",
	                        "296d96ed0561"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
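When reproducing a post-mortem like the one above by hand, the full `docker inspect` dump can be narrowed to a single field with a Go template; the harness itself uses this pattern later in these logs to recover the host port mapped to the container's SSH port. A minimal sketch (assumes the addons-377526 container still exists on the host):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-377526
	# prints the 127.0.0.1-bound ephemeral port for 22/tcp; 34240 in the dump above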
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-377526 -n addons-377526
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-377526 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-377526 logs -n 25: (1.562079653s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-711071 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-711071   │ jenkins │ v1.37.0 │ 09 Dec 25 04:15 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 09 Dec 25 04:16 UTC │ 09 Dec 25 04:16 UTC │
	│ delete  │ -p download-only-711071                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-711071   │ jenkins │ v1.37.0 │ 09 Dec 25 04:16 UTC │ 09 Dec 25 04:16 UTC │
	│ start   │ -o=json --download-only -p download-only-640851 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-640851   │ jenkins │ v1.37.0 │ 09 Dec 25 04:16 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 09 Dec 25 04:16 UTC │ 09 Dec 25 04:16 UTC │
	│ delete  │ -p download-only-640851                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-640851   │ jenkins │ v1.37.0 │ 09 Dec 25 04:16 UTC │ 09 Dec 25 04:16 UTC │
	│ start   │ -o=json --download-only -p download-only-306472 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                         │ download-only-306472   │ jenkins │ v1.37.0 │ 09 Dec 25 04:16 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 09 Dec 25 04:17 UTC │ 09 Dec 25 04:17 UTC │
	│ delete  │ -p download-only-306472                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-306472   │ jenkins │ v1.37.0 │ 09 Dec 25 04:17 UTC │ 09 Dec 25 04:17 UTC │
	│ delete  │ -p download-only-711071                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-711071   │ jenkins │ v1.37.0 │ 09 Dec 25 04:17 UTC │ 09 Dec 25 04:17 UTC │
	│ delete  │ -p download-only-640851                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-640851   │ jenkins │ v1.37.0 │ 09 Dec 25 04:17 UTC │ 09 Dec 25 04:17 UTC │
	│ delete  │ -p download-only-306472                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-306472   │ jenkins │ v1.37.0 │ 09 Dec 25 04:17 UTC │ 09 Dec 25 04:17 UTC │
	│ start   │ --download-only -p download-docker-739882 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-739882 │ jenkins │ v1.37.0 │ 09 Dec 25 04:17 UTC │                     │
	│ delete  │ -p download-docker-739882                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-739882 │ jenkins │ v1.37.0 │ 09 Dec 25 04:17 UTC │ 09 Dec 25 04:17 UTC │
	│ start   │ --download-only -p binary-mirror-878510 --alsologtostderr --binary-mirror http://127.0.0.1:38315 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-878510   │ jenkins │ v1.37.0 │ 09 Dec 25 04:17 UTC │                     │
	│ delete  │ -p binary-mirror-878510                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-878510   │ jenkins │ v1.37.0 │ 09 Dec 25 04:17 UTC │ 09 Dec 25 04:17 UTC │
	│ addons  │ enable dashboard -p addons-377526                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-377526          │ jenkins │ v1.37.0 │ 09 Dec 25 04:17 UTC │                     │
	│ addons  │ disable dashboard -p addons-377526                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-377526          │ jenkins │ v1.37.0 │ 09 Dec 25 04:17 UTC │                     │
	│ start   │ -p addons-377526 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-377526          │ jenkins │ v1.37.0 │ 09 Dec 25 04:17 UTC │ 09 Dec 25 04:19 UTC │
	│ addons  │ addons-377526 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-377526          │ jenkins │ v1.37.0 │ 09 Dec 25 04:19 UTC │                     │
	│ addons  │ addons-377526 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-377526          │ jenkins │ v1.37.0 │ 09 Dec 25 04:19 UTC │                     │
	│ addons  │ enable headlamp -p addons-377526 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-377526          │ jenkins │ v1.37.0 │ 09 Dec 25 04:19 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
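	# Aside: the cluster-under-test start invocation recorded in the Audit table above,
	# reflowed for readability only; flags are verbatim from that row.
	out/minikube-linux-arm64 start -p addons-377526 --wait=true --memory=4096 --alsologtostderr \
	  --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots \
	  --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget \
	  --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin \
	  --driver=docker --container-runtime=crio \
	  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher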
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 04:17:10
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 04:17:10.635592 1581510 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:17:10.635777 1581510 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:17:10.635810 1581510 out.go:374] Setting ErrFile to fd 2...
	I1209 04:17:10.635830 1581510 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:17:10.636100 1581510 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 04:17:10.636564 1581510 out.go:368] Setting JSON to false
	I1209 04:17:10.637470 1581510 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":32371,"bootTime":1765221460,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1209 04:17:10.637561 1581510 start.go:143] virtualization:  
	I1209 04:17:10.640931 1581510 out.go:179] * [addons-377526] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 04:17:10.644866 1581510 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 04:17:10.645037 1581510 notify.go:221] Checking for updates...
	I1209 04:17:10.648524 1581510 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:17:10.651409 1581510 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:17:10.654319 1581510 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1577059/.minikube
	I1209 04:17:10.657260 1581510 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 04:17:10.660269 1581510 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 04:17:10.663576 1581510 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:17:10.687427 1581510 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:17:10.687569 1581510 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:17:10.749213 1581510 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-09 04:17:10.740144851 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:17:10.749323 1581510 docker.go:319] overlay module found
	I1209 04:17:10.752546 1581510 out.go:179] * Using the docker driver based on user configuration
	I1209 04:17:10.755462 1581510 start.go:309] selected driver: docker
	I1209 04:17:10.755486 1581510 start.go:927] validating driver "docker" against <nil>
	I1209 04:17:10.755500 1581510 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 04:17:10.756220 1581510 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:17:10.816998 1581510 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-09 04:17:10.802629837 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:17:10.817154 1581510 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1209 04:17:10.817379 1581510 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 04:17:10.820211 1581510 out.go:179] * Using Docker driver with root privileges
	I1209 04:17:10.823135 1581510 cni.go:84] Creating CNI manager for ""
	I1209 04:17:10.823208 1581510 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 04:17:10.823223 1581510 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 04:17:10.823304 1581510 start.go:353] cluster config:
	{Name:addons-377526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-377526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:17:10.828224 1581510 out.go:179] * Starting "addons-377526" primary control-plane node in "addons-377526" cluster
	I1209 04:17:10.831060 1581510 cache.go:134] Beginning downloading kic base image for docker with crio
	I1209 04:17:10.833871 1581510 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 04:17:10.836657 1581510 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 04:17:10.836707 1581510 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1209 04:17:10.836721 1581510 cache.go:65] Caching tarball of preloaded images
	I1209 04:17:10.836724 1581510 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 04:17:10.836809 1581510 preload.go:238] Found /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1209 04:17:10.836821 1581510 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1209 04:17:10.837194 1581510 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/config.json ...
	I1209 04:17:10.837224 1581510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/config.json: {Name:mk525736410a35602d90482be6cfa75a8128ee96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:17:10.856451 1581510 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 04:17:10.856475 1581510 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 04:17:10.856495 1581510 cache.go:243] Successfully downloaded all kic artifacts
	I1209 04:17:10.856526 1581510 start.go:360] acquireMachinesLock for addons-377526: {Name:mk7b7abdce6736faefe4780e4882eb58e1ac6bd6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 04:17:10.856654 1581510 start.go:364] duration metric: took 97.01µs to acquireMachinesLock for "addons-377526"
	I1209 04:17:10.856693 1581510 start.go:93] Provisioning new machine with config: &{Name:addons-377526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-377526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 04:17:10.856766 1581510 start.go:125] createHost starting for "" (driver="docker")
	I1209 04:17:10.861918 1581510 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1209 04:17:10.862162 1581510 start.go:159] libmachine.API.Create for "addons-377526" (driver="docker")
	I1209 04:17:10.862202 1581510 client.go:173] LocalClient.Create starting
	I1209 04:17:10.862329 1581510 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem
	I1209 04:17:10.952835 1581510 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem
	I1209 04:17:11.005752 1581510 cli_runner.go:164] Run: docker network inspect addons-377526 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1209 04:17:11.022092 1581510 cli_runner.go:211] docker network inspect addons-377526 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1209 04:17:11.022175 1581510 network_create.go:284] running [docker network inspect addons-377526] to gather additional debugging logs...
	I1209 04:17:11.022203 1581510 cli_runner.go:164] Run: docker network inspect addons-377526
	W1209 04:17:11.038306 1581510 cli_runner.go:211] docker network inspect addons-377526 returned with exit code 1
	I1209 04:17:11.038339 1581510 network_create.go:287] error running [docker network inspect addons-377526]: docker network inspect addons-377526: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-377526 not found
	I1209 04:17:11.038354 1581510 network_create.go:289] output of [docker network inspect addons-377526]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-377526 not found
	
	** /stderr **
	I1209 04:17:11.038461 1581510 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 04:17:11.055227 1581510 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019a8ee0}
	I1209 04:17:11.055274 1581510 network_create.go:124] attempt to create docker network addons-377526 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1209 04:17:11.055332 1581510 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-377526 addons-377526
	I1209 04:17:11.115693 1581510 network_create.go:108] docker network addons-377526 192.168.49.0/24 created
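	# Aside: the "network not found" probe above is the expected first-start path;
	# minikube then creates the network. A hand-run equivalent of that create step
	# (a sketch only; flags and labels copied from the log line above):
	docker network inspect addons-377526 >/dev/null 2>&1 || \
	  docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
	    -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	    --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-377526 \
	    addons-377526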
	I1209 04:17:11.115728 1581510 kic.go:121] calculated static IP "192.168.49.2" for the "addons-377526" container
	I1209 04:17:11.115823 1581510 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1209 04:17:11.132890 1581510 cli_runner.go:164] Run: docker volume create addons-377526 --label name.minikube.sigs.k8s.io=addons-377526 --label created_by.minikube.sigs.k8s.io=true
	I1209 04:17:11.151462 1581510 oci.go:103] Successfully created a docker volume addons-377526
	I1209 04:17:11.151560 1581510 cli_runner.go:164] Run: docker run --rm --name addons-377526-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-377526 --entrypoint /usr/bin/test -v addons-377526:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -d /var/lib
	I1209 04:17:12.264156 1581510 cli_runner.go:217] Completed: docker run --rm --name addons-377526-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-377526 --entrypoint /usr/bin/test -v addons-377526:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -d /var/lib: (1.112539146s)
	I1209 04:17:12.264190 1581510 oci.go:107] Successfully prepared a docker volume addons-377526
	I1209 04:17:12.264230 1581510 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 04:17:12.264248 1581510 kic.go:194] Starting extracting preloaded images to volume ...
	I1209 04:17:12.264312 1581510 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-377526:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir
	I1209 04:17:16.233694 1581510 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-377526:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir: (3.969342689s)
	I1209 04:17:16.233741 1581510 kic.go:203] duration metric: took 3.969487931s to extract preloaded images to volume ...
	W1209 04:17:16.233896 1581510 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1209 04:17:16.234033 1581510 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1209 04:17:16.287338 1581510 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-377526 --name addons-377526 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-377526 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-377526 --network addons-377526 --ip 192.168.49.2 --volume addons-377526:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c
	I1209 04:17:16.566171 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Running}}
	I1209 04:17:16.584420 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:16.610127 1581510 cli_runner.go:164] Run: docker exec addons-377526 stat /var/lib/dpkg/alternatives/iptables
	I1209 04:17:16.666937 1581510 oci.go:144] the created container "addons-377526" has a running status.
	I1209 04:17:16.666965 1581510 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa...
	I1209 04:17:17.604499 1581510 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1209 04:17:17.623897 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:17.640944 1581510 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1209 04:17:17.640969 1581510 kic_runner.go:114] Args: [docker exec --privileged addons-377526 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1209 04:17:17.678901 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:17.695788 1581510 machine.go:94] provisionDockerMachine start ...
	I1209 04:17:17.695894 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:17.712601 1581510 main.go:143] libmachine: Using SSH client type: native
	I1209 04:17:17.712954 1581510 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34240 <nil> <nil>}
	I1209 04:17:17.712971 1581510 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 04:17:17.713553 1581510 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50438->127.0.0.1:34240: read: connection reset by peer
	I1209 04:17:20.866085 1581510 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-377526
	
	I1209 04:17:20.866118 1581510 ubuntu.go:182] provisioning hostname "addons-377526"
	I1209 04:17:20.866182 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:20.884453 1581510 main.go:143] libmachine: Using SSH client type: native
	I1209 04:17:20.884776 1581510 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34240 <nil> <nil>}
	I1209 04:17:20.884787 1581510 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-377526 && echo "addons-377526" | sudo tee /etc/hostname
	I1209 04:17:21.044929 1581510 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-377526
	
	I1209 04:17:21.045028 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:21.063041 1581510 main.go:143] libmachine: Using SSH client type: native
	I1209 04:17:21.063367 1581510 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34240 <nil> <nil>}
	I1209 04:17:21.063391 1581510 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-377526' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-377526/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-377526' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 04:17:21.218967 1581510 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 04:17:21.219036 1581510 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1577059/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1577059/.minikube}
	I1209 04:17:21.219068 1581510 ubuntu.go:190] setting up certificates
	I1209 04:17:21.219086 1581510 provision.go:84] configureAuth start
	I1209 04:17:21.219158 1581510 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-377526
	I1209 04:17:21.236274 1581510 provision.go:143] copyHostCerts
	I1209 04:17:21.236367 1581510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem (1078 bytes)
	I1209 04:17:21.236488 1581510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem (1123 bytes)
	I1209 04:17:21.236559 1581510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem (1675 bytes)
	I1209 04:17:21.236619 1581510 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem org=jenkins.addons-377526 san=[127.0.0.1 192.168.49.2 addons-377526 localhost minikube]
	I1209 04:17:21.622818 1581510 provision.go:177] copyRemoteCerts
	I1209 04:17:21.622892 1581510 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 04:17:21.622935 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:21.639573 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:17:21.746296 1581510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 04:17:21.763251 1581510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1209 04:17:21.780433 1581510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 04:17:21.797477 1581510 provision.go:87] duration metric: took 578.368257ms to configureAuth
	I1209 04:17:21.797508 1581510 ubuntu.go:206] setting minikube options for container-runtime
	I1209 04:17:21.797698 1581510 config.go:182] Loaded profile config "addons-377526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:17:21.797812 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:21.814355 1581510 main.go:143] libmachine: Using SSH client type: native
	I1209 04:17:21.814705 1581510 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34240 <nil> <nil>}
	I1209 04:17:21.814728 1581510 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 04:17:22.434769 1581510 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 04:17:22.434795 1581510 machine.go:97] duration metric: took 4.73898637s to provisionDockerMachine
	I1209 04:17:22.434807 1581510 client.go:176] duration metric: took 11.572593166s to LocalClient.Create
	I1209 04:17:22.434832 1581510 start.go:167] duration metric: took 11.572661212s to libmachine.API.Create "addons-377526"
	I1209 04:17:22.434843 1581510 start.go:293] postStartSetup for "addons-377526" (driver="docker")
	I1209 04:17:22.434852 1581510 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 04:17:22.434944 1581510 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 04:17:22.435005 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:22.451622 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:17:22.558554 1581510 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 04:17:22.561787 1581510 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 04:17:22.561817 1581510 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 04:17:22.561833 1581510 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1577059/.minikube/addons for local assets ...
	I1209 04:17:22.561897 1581510 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1577059/.minikube/files for local assets ...
	I1209 04:17:22.561925 1581510 start.go:296] duration metric: took 127.076955ms for postStartSetup
	I1209 04:17:22.562230 1581510 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-377526
	I1209 04:17:22.578897 1581510 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/config.json ...
	I1209 04:17:22.579200 1581510 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 04:17:22.579259 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:22.595362 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:17:22.699580 1581510 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 04:17:22.704107 1581510 start.go:128] duration metric: took 11.847324258s to createHost
	I1209 04:17:22.704179 1581510 start.go:83] releasing machines lock for "addons-377526", held for 11.847510728s
	I1209 04:17:22.704279 1581510 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-377526
	I1209 04:17:22.721398 1581510 ssh_runner.go:195] Run: cat /version.json
	I1209 04:17:22.721448 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:22.721676 1581510 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 04:17:22.721743 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:22.739696 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:17:22.750624 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:17:22.932931 1581510 ssh_runner.go:195] Run: systemctl --version
	I1209 04:17:22.939424 1581510 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 04:17:22.980356 1581510 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 04:17:22.984855 1581510 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 04:17:22.984979 1581510 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 04:17:23.017345 1581510 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
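	The find/mv pass above sidelines any bridge or podman CNI configs so they cannot conflict with the kindnet config applied later. A minimal sketch to confirm the result by hand (assumes the docker driver, so the node is the addons-377526 container; expected names taken from the "disabled [...]" line above):
	# Sketch: the renamed configs should now carry the .mk_disabled suffix.
	docker exec addons-377526 ls -1 /etc/cni/net.d/
	# expected to include, per the log above:
	#   87-podman-bridge.conflist.mk_disabled
	#   10-crio-bridge.conflist.disabled.mk_disabled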
	I1209 04:17:23.017374 1581510 start.go:496] detecting cgroup driver to use...
	I1209 04:17:23.017409 1581510 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 04:17:23.017475 1581510 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 04:17:23.035963 1581510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 04:17:23.048921 1581510 docker.go:218] disabling cri-docker service (if available) ...
	I1209 04:17:23.048990 1581510 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 04:17:23.066802 1581510 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 04:17:23.085903 1581510 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 04:17:23.207123 1581510 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 04:17:23.325166 1581510 docker.go:234] disabling docker service ...
	I1209 04:17:23.325307 1581510 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 04:17:23.346866 1581510 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 04:17:23.360121 1581510 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 04:17:23.473271 1581510 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 04:17:23.581520 1581510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 04:17:23.594275 1581510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 04:17:23.607562 1581510 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 04:17:23.607657 1581510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:17:23.616587 1581510 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 04:17:23.616691 1581510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:17:23.626168 1581510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:17:23.634723 1581510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:17:23.643810 1581510 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 04:17:23.652128 1581510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:17:23.661212 1581510 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:17:23.674712 1581510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:17:23.683477 1581510 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 04:17:23.691323 1581510 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 04:17:23.698795 1581510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:17:23.804736 1581510 ssh_runner.go:195] Run: sudo systemctl restart crio
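	The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before this restart picks the changes up. A hedged sketch of how to verify the edits landed (again assuming the docker driver and this run's container name):
	# Sketch: confirm the keys the sed edits above were meant to set.
	docker exec addons-377526 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected, per the commands above:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",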
	I1209 04:17:23.973117 1581510 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 04:17:23.973253 1581510 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 04:17:23.977141 1581510 start.go:564] Will wait 60s for crictl version
	I1209 04:17:23.977235 1581510 ssh_runner.go:195] Run: which crictl
	I1209 04:17:23.980740 1581510 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 04:17:24.023318 1581510 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1209 04:17:24.023502 1581510 ssh_runner.go:195] Run: crio --version
	I1209 04:17:24.056606 1581510 ssh_runner.go:195] Run: crio --version
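	Note that crictl needs no --runtime-endpoint flag in these checks: the /etc/crictl.yaml written above already points it at cri-o's socket. The same check by hand (assumes docker exec runs as root in the kicbase container):
	# Sketch: crictl resolves the endpoint from /etc/crictl.yaml.
	docker exec addons-377526 crictl version
	# per the log above: RuntimeName cri-o, RuntimeVersion 1.34.3, API v1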
	I1209 04:17:24.090953 1581510 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1209 04:17:24.093932 1581510 cli_runner.go:164] Run: docker network inspect addons-377526 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 04:17:24.110133 1581510 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1209 04:17:24.114282 1581510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
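	The grep-and-rewrite one-liner above strips any stale host.minikube.internal entry and re-adds it, mapping the name to the docker network gateway. A quick hedged check:
	# Sketch: exactly one entry should point at the network gateway.
	docker exec addons-377526 grep 'host.minikube.internal' /etc/hosts
	# expected: 192.168.49.1	host.minikube.internal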
	I1209 04:17:24.124705 1581510 kubeadm.go:884] updating cluster {Name:addons-377526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-377526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 04:17:24.124837 1581510 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 04:17:24.124894 1581510 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:17:24.172493 1581510 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 04:17:24.172518 1581510 crio.go:433] Images already preloaded, skipping extraction
	I1209 04:17:24.172575 1581510 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:17:24.198094 1581510 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 04:17:24.198119 1581510 cache_images.go:86] Images are preloaded, skipping loading
	I1209 04:17:24.198128 1581510 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1209 04:17:24.198214 1581510 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-377526 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-377526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
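	The unit fragment above is the kubelet drop-in minikube renders from the node config; the scp lines below install it as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf alongside the base kubelet.service. A sketch to inspect the merged result on the node:
	# Sketch: view the kubelet unit together with the 10-kubeadm.conf drop-in.
	docker exec addons-377526 systemctl cat kubelet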
	I1209 04:17:24.198301 1581510 ssh_runner.go:195] Run: crio config
	I1209 04:17:24.262764 1581510 cni.go:84] Creating CNI manager for ""
	I1209 04:17:24.262793 1581510 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 04:17:24.262838 1581510 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 04:17:24.262868 1581510 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-377526 NodeName:addons-377526 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 04:17:24.263010 1581510 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-377526"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
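	The rendered config above is written below to /var/tmp/minikube/kubeadm.yaml.new and later copied to kubeadm.yaml before init runs. A hedged sketch for sanity-checking it by hand on the node (recent kubeadm releases ship a `config validate` subcommand; binary path per this run):
	# Sketch: validate the rendered kubeadm config on the node before init.
	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml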
	
	I1209 04:17:24.263087 1581510 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1209 04:17:24.271000 1581510 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 04:17:24.271069 1581510 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 04:17:24.278844 1581510 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1209 04:17:24.292188 1581510 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 04:17:24.304754 1581510 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1209 04:17:24.317580 1581510 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1209 04:17:24.321227 1581510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 04:17:24.331065 1581510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:17:24.459901 1581510 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 04:17:24.478161 1581510 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526 for IP: 192.168.49.2
	I1209 04:17:24.478235 1581510 certs.go:195] generating shared ca certs ...
	I1209 04:17:24.478273 1581510 certs.go:227] acquiring lock for ca certs: {Name:mkbe8bce08db7aa945866791683d426e1b560718 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:17:24.478454 1581510 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key
	I1209 04:17:24.582108 1581510 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt ...
	I1209 04:17:24.582139 1581510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt: {Name:mk3a1918fa927ff9d32540da018f7eefbfc4b54b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:17:24.582340 1581510 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key ...
	I1209 04:17:24.582355 1581510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key: {Name:mk464efebeae6480718a4aefc3e662e3af96267f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:17:24.582468 1581510 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key
	I1209 04:17:24.732777 1581510 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.crt ...
	I1209 04:17:24.732808 1581510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.crt: {Name:mkf78f9fc0de3e89a151cf75e195ecd99b1990fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:17:24.732984 1581510 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key ...
	I1209 04:17:24.732999 1581510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key: {Name:mk0ad0fe979156209211c3c09aef76eb323713c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:17:24.733080 1581510 certs.go:257] generating profile certs ...
	I1209 04:17:24.733147 1581510 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.key
	I1209 04:17:24.733169 1581510 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt with IP's: []
	I1209 04:17:25.010284 1581510 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt ...
	I1209 04:17:25.010323 1581510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt: {Name:mkc865236fad47470fd38078b1a8f35f9a1112a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:17:25.010524 1581510 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.key ...
	I1209 04:17:25.010538 1581510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.key: {Name:mkca0decbe07d7184d011490eceb71932eccdd5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:17:25.010648 1581510 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/apiserver.key.bf3f738b
	I1209 04:17:25.010675 1581510 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/apiserver.crt.bf3f738b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1209 04:17:25.348310 1581510 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/apiserver.crt.bf3f738b ...
	I1209 04:17:25.348344 1581510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/apiserver.crt.bf3f738b: {Name:mk9672d7335ff226422c27cf73be60bb20f6b19e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:17:25.348523 1581510 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/apiserver.key.bf3f738b ...
	I1209 04:17:25.348541 1581510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/apiserver.key.bf3f738b: {Name:mkf8b61c27c6b290b98491b354fe8bf17c07e2e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:17:25.348617 1581510 certs.go:382] copying /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/apiserver.crt.bf3f738b -> /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/apiserver.crt
	I1209 04:17:25.348716 1581510 certs.go:386] copying /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/apiserver.key.bf3f738b -> /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/apiserver.key
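	The apiserver cert assembled above is signed for the IPs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]; 10.96.0.1 is the first address in the 10.96.0.0/12 ServiceCIDR, i.e. the in-cluster `kubernetes` Service IP through which pods reach the apiserver. A sketch to list the SANs from the host-side copy:
	# Sketch: print the SANs baked into the freshly assembled apiserver cert.
	openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/apiserver.crt \
	    | grep -A1 'Subject Alternative Name'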
	I1209 04:17:25.348774 1581510 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/proxy-client.key
	I1209 04:17:25.348797 1581510 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/proxy-client.crt with IP's: []
	I1209 04:17:25.502270 1581510 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/proxy-client.crt ...
	I1209 04:17:25.502301 1581510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/proxy-client.crt: {Name:mk6a935916017e206f3bcc29fe39cbf396348f1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:17:25.502495 1581510 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/proxy-client.key ...
	I1209 04:17:25.502532 1581510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/proxy-client.key: {Name:mkf5ed0c32959705b0f222b7088fadea8a48b8e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:17:25.502745 1581510 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 04:17:25.502792 1581510 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem (1078 bytes)
	I1209 04:17:25.502824 1581510 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem (1123 bytes)
	I1209 04:17:25.502860 1581510 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem (1675 bytes)
	I1209 04:17:25.503436 1581510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 04:17:25.524980 1581510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 04:17:25.543194 1581510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 04:17:25.560875 1581510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 04:17:25.578702 1581510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1209 04:17:25.596972 1581510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 04:17:25.614666 1581510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 04:17:25.632620 1581510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 04:17:25.650560 1581510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 04:17:25.668671 1581510 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 04:17:25.681305 1581510 ssh_runner.go:195] Run: openssl version
	I1209 04:17:25.687727 1581510 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:17:25.695727 1581510 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 04:17:25.703405 1581510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:17:25.707256 1581510 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:17 /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:17:25.707329 1581510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:17:25.753764 1581510 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 04:17:25.761498 1581510 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
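	The b5213941.0 link name is derived, not arbitrary: it is the OpenSSL subject hash of the CA computed by the `openssl x509 -hash` step above, plus the conventional .0 suffix OpenSSL uses when looking up CAs in /etc/ssl/certs.
	# Sketch: reproduce the hash that names the /etc/ssl/certs/<hash>.0 link.
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# -> b5213941, hence the b5213941.0 symlink created above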
	I1209 04:17:25.769034 1581510 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 04:17:25.772900 1581510 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 04:17:25.772953 1581510 kubeadm.go:401] StartCluster: {Name:addons-377526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-377526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:17:25.773031 1581510 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 04:17:25.773107 1581510 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:17:25.802926 1581510 cri.go:89] found id: ""
	I1209 04:17:25.803043 1581510 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 04:17:25.811233 1581510 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 04:17:25.819148 1581510 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 04:17:25.819214 1581510 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 04:17:25.826947 1581510 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 04:17:25.826969 1581510 kubeadm.go:158] found existing configuration files:
	
	I1209 04:17:25.827049 1581510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 04:17:25.835323 1581510 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 04:17:25.835419 1581510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 04:17:25.844218 1581510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 04:17:25.852387 1581510 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 04:17:25.852486 1581510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 04:17:25.860356 1581510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 04:17:25.868441 1581510 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 04:17:25.868540 1581510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 04:17:25.876243 1581510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 04:17:25.884510 1581510 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 04:17:25.884602 1581510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 04:17:25.892444 1581510 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 04:17:25.930192 1581510 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1209 04:17:25.930501 1581510 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 04:17:25.955176 1581510 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 04:17:25.955252 1581510 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1209 04:17:25.955293 1581510 kubeadm.go:319] OS: Linux
	I1209 04:17:25.955345 1581510 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 04:17:25.955398 1581510 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1209 04:17:25.955449 1581510 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 04:17:25.955501 1581510 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 04:17:25.955552 1581510 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 04:17:25.955604 1581510 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 04:17:25.955655 1581510 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 04:17:25.955711 1581510 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 04:17:25.955761 1581510 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1209 04:17:26.036977 1581510 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 04:17:26.037100 1581510 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 04:17:26.037198 1581510 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 04:17:26.047029 1581510 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 04:17:26.050852 1581510 out.go:252]   - Generating certificates and keys ...
	I1209 04:17:26.050952 1581510 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 04:17:26.051025 1581510 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 04:17:26.320103 1581510 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 04:17:26.846351 1581510 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1209 04:17:27.470021 1581510 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1209 04:17:27.921121 1581510 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1209 04:17:28.611524 1581510 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1209 04:17:28.611672 1581510 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-377526 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1209 04:17:29.386222 1581510 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1209 04:17:29.386597 1581510 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-377526 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1209 04:17:29.851314 1581510 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 04:17:30.444407 1581510 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 04:17:30.699323 1581510 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1209 04:17:30.699608 1581510 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 04:17:31.020651 1581510 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 04:17:31.282136 1581510 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 04:17:31.538792 1581510 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 04:17:31.924035 1581510 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 04:17:32.066703 1581510 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 04:17:32.067461 1581510 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 04:17:32.070283 1581510 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 04:17:32.073773 1581510 out.go:252]   - Booting up control plane ...
	I1209 04:17:32.073894 1581510 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 04:17:32.083212 1581510 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 04:17:32.083294 1581510 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 04:17:32.101131 1581510 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 04:17:32.101256 1581510 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 04:17:32.109804 1581510 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 04:17:32.110195 1581510 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 04:17:32.110427 1581510 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 04:17:32.248104 1581510 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 04:17:32.248238 1581510 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 04:17:33.248504 1581510 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000859879s
	I1209 04:17:33.253293 1581510 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1209 04:17:33.253394 1581510 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1209 04:17:33.253483 1581510 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1209 04:17:33.253566 1581510 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1209 04:17:36.794701 1581510 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.541014639s
	I1209 04:17:38.484222 1581510 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.230843404s
	I1209 04:17:40.256090 1581510 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.002637701s
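	The three control-plane checks above poll fixed local endpoints. A hedged sketch to repeat them by hand from inside the node (-k skips CA verification for brevity; addresses and ports per the log):
	# Sketch: re-run the control-plane health probes shown above.
	curl -sk https://192.168.49.2:8443/livez     # kube-apiserver
	curl -sk https://127.0.0.1:10257/healthz     # kube-controller-manager
	curl -sk https://127.0.0.1:10259/livez       # kube-scheduler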
	I1209 04:17:40.290679 1581510 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 04:17:40.316217 1581510 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 04:17:40.357821 1581510 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 04:17:40.358297 1581510 kubeadm.go:319] [mark-control-plane] Marking the node addons-377526 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 04:17:40.371078 1581510 kubeadm.go:319] [bootstrap-token] Using token: lnu59a.k8lvbqwoiryzooup
	I1209 04:17:40.374022 1581510 out.go:252]   - Configuring RBAC rules ...
	I1209 04:17:40.374146 1581510 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 04:17:40.379060 1581510 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 04:17:40.389225 1581510 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 04:17:40.393812 1581510 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 04:17:40.398210 1581510 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 04:17:40.402763 1581510 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 04:17:40.664334 1581510 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 04:17:41.107274 1581510 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1209 04:17:41.663073 1581510 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1209 04:17:41.664072 1581510 kubeadm.go:319] 
	I1209 04:17:41.664144 1581510 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1209 04:17:41.664149 1581510 kubeadm.go:319] 
	I1209 04:17:41.664226 1581510 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1209 04:17:41.664231 1581510 kubeadm.go:319] 
	I1209 04:17:41.664261 1581510 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1209 04:17:41.664332 1581510 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 04:17:41.664383 1581510 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 04:17:41.664387 1581510 kubeadm.go:319] 
	I1209 04:17:41.664440 1581510 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1209 04:17:41.664444 1581510 kubeadm.go:319] 
	I1209 04:17:41.664491 1581510 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 04:17:41.664495 1581510 kubeadm.go:319] 
	I1209 04:17:41.664548 1581510 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1209 04:17:41.664623 1581510 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 04:17:41.664699 1581510 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 04:17:41.664704 1581510 kubeadm.go:319] 
	I1209 04:17:41.664788 1581510 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 04:17:41.664864 1581510 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1209 04:17:41.664868 1581510 kubeadm.go:319] 
	I1209 04:17:41.664954 1581510 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token lnu59a.k8lvbqwoiryzooup \
	I1209 04:17:41.665057 1581510 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7776204d6c5f563a8dabf61d61a81585bb99fbd1023d362d699de436ef3f27fb \
	I1209 04:17:41.665077 1581510 kubeadm.go:319] 	--control-plane 
	I1209 04:17:41.665081 1581510 kubeadm.go:319] 
	I1209 04:17:41.665166 1581510 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1209 04:17:41.665170 1581510 kubeadm.go:319] 
	I1209 04:17:41.665252 1581510 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token lnu59a.k8lvbqwoiryzooup \
	I1209 04:17:41.665354 1581510 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7776204d6c5f563a8dabf61d61a81585bb99fbd1023d362d699de436ef3f27fb 
	I1209 04:17:41.668049 1581510 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1209 04:17:41.668268 1581510 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 04:17:41.668371 1581510 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
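	The join commands printed above embed --discovery-token-ca-cert-hash, which is simply the SHA-256 of the cluster CA's public key. A sketch to recompute it on this node (CA path per the scp to /var/lib/minikube/certs above; assumes an RSA CA, kubeadm's default):
	# Sketch: recompute the discovery-token CA cert hash from the log above.
	openssl x509 -pubkey -noout -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	# -> 7776204d6c5f563a8dabf61d61a81585bb99fbd1023d362d699de436ef3f27fb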
	I1209 04:17:41.668392 1581510 cni.go:84] Creating CNI manager for ""
	I1209 04:17:41.668400 1581510 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 04:17:41.671506 1581510 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1209 04:17:41.674449 1581510 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1209 04:17:41.678300 1581510 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1209 04:17:41.678322 1581510 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1209 04:17:41.692999 1581510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1209 04:17:41.980288 1581510 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 04:17:41.980443 1581510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 04:17:41.980507 1581510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-377526 minikube.k8s.io/updated_at=2025_12_09T04_17_41_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d minikube.k8s.io/name=addons-377526 minikube.k8s.io/primary=true
	I1209 04:17:42.250623 1581510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 04:17:42.250757 1581510 ops.go:34] apiserver oom_adj: -16
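	The -16 recorded above is the OOM score adjustment applied to the static kube-apiserver pod: negative values tell the kernel's OOM killer to prefer reclaiming ordinary processes first. The same check the log ran, by hand:
	# Sketch: read the apiserver's legacy OOM adjustment (range -17..15).
	cat /proc/$(pgrep kube-apiserver)/oom_adj   # expect -16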
	I1209 04:17:42.751378 1581510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 04:17:43.250725 1581510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 04:17:43.751149 1581510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 04:17:44.251583 1581510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 04:17:44.751376 1581510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 04:17:45.250827 1581510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 04:17:45.750890 1581510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 04:17:45.842157 1581510 kubeadm.go:1114] duration metric: took 3.86177558s to wait for elevateKubeSystemPrivileges
	I1209 04:17:45.842205 1581510 kubeadm.go:403] duration metric: took 20.069257415s to StartCluster
	I1209 04:17:45.842226 1581510 settings.go:142] acquiring lock: {Name:mk2ff9b0d23dc8757d89015af482b8c477568e49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:17:45.842356 1581510 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:17:45.842782 1581510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/kubeconfig: {Name:mk56da51bd85daae017f7ca18ae73d8a385a4c6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:17:45.842989 1581510 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 04:17:45.843098 1581510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1209 04:17:45.843365 1581510 config.go:182] Loaded profile config "addons-377526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:17:45.843401 1581510 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1209 04:17:45.843480 1581510 addons.go:70] Setting yakd=true in profile "addons-377526"
	I1209 04:17:45.843502 1581510 addons.go:239] Setting addon yakd=true in "addons-377526"
	I1209 04:17:45.843524 1581510 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:17:45.844010 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:45.844260 1581510 addons.go:70] Setting inspektor-gadget=true in profile "addons-377526"
	I1209 04:17:45.844282 1581510 addons.go:239] Setting addon inspektor-gadget=true in "addons-377526"
	I1209 04:17:45.844304 1581510 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:17:45.844737 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:45.845113 1581510 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-377526"
	I1209 04:17:45.845137 1581510 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-377526"
	I1209 04:17:45.845160 1581510 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:17:45.845583 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:45.846522 1581510 addons.go:70] Setting metrics-server=true in profile "addons-377526"
	I1209 04:17:45.846551 1581510 addons.go:239] Setting addon metrics-server=true in "addons-377526"
	I1209 04:17:45.846594 1581510 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:17:45.847015 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:45.853508 1581510 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-377526"
	I1209 04:17:45.853544 1581510 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-377526"
	I1209 04:17:45.853583 1581510 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:17:45.854089 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:45.857511 1581510 addons.go:70] Setting cloud-spanner=true in profile "addons-377526"
	I1209 04:17:45.857583 1581510 addons.go:239] Setting addon cloud-spanner=true in "addons-377526"
	I1209 04:17:45.857641 1581510 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:17:45.860128 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:45.863781 1581510 addons.go:70] Setting registry=true in profile "addons-377526"
	I1209 04:17:45.863822 1581510 addons.go:239] Setting addon registry=true in "addons-377526"
	I1209 04:17:45.863874 1581510 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:17:45.864334 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:45.869965 1581510 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-377526"
	I1209 04:17:45.870036 1581510 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-377526"
	I1209 04:17:45.870067 1581510 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:17:45.870520 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:45.882350 1581510 addons.go:70] Setting registry-creds=true in profile "addons-377526"
	I1209 04:17:45.882404 1581510 addons.go:239] Setting addon registry-creds=true in "addons-377526"
	I1209 04:17:45.882448 1581510 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:17:45.882959 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:45.886707 1581510 addons.go:70] Setting default-storageclass=true in profile "addons-377526"
	I1209 04:17:45.886795 1581510 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-377526"
	I1209 04:17:45.887197 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:45.898032 1581510 addons.go:70] Setting gcp-auth=true in profile "addons-377526"
	I1209 04:17:45.898081 1581510 mustload.go:66] Loading cluster: addons-377526
	I1209 04:17:45.898287 1581510 config.go:182] Loaded profile config "addons-377526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:17:45.898621 1581510 addons.go:70] Setting storage-provisioner=true in profile "addons-377526"
	I1209 04:17:45.898642 1581510 addons.go:239] Setting addon storage-provisioner=true in "addons-377526"
	I1209 04:17:45.898669 1581510 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:17:45.899043 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:45.904285 1581510 addons.go:70] Setting ingress=true in profile "addons-377526"
	I1209 04:17:45.904375 1581510 addons.go:239] Setting addon ingress=true in "addons-377526"
	I1209 04:17:45.904448 1581510 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:17:45.904967 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:45.908213 1581510 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-377526"
	I1209 04:17:45.908253 1581510 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-377526"
	I1209 04:17:45.908584 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:45.918401 1581510 addons.go:70] Setting ingress-dns=true in profile "addons-377526"
	I1209 04:17:45.918436 1581510 addons.go:239] Setting addon ingress-dns=true in "addons-377526"
	I1209 04:17:45.918478 1581510 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:17:45.918978 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:45.920595 1581510 addons.go:70] Setting volcano=true in profile "addons-377526"
	I1209 04:17:45.920622 1581510 addons.go:239] Setting addon volcano=true in "addons-377526"
	I1209 04:17:45.920666 1581510 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:17:45.921120 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:45.933267 1581510 out.go:179] * Verifying Kubernetes components...
	I1209 04:17:45.937532 1581510 addons.go:70] Setting volumesnapshots=true in profile "addons-377526"
	I1209 04:17:45.937573 1581510 addons.go:239] Setting addon volumesnapshots=true in "addons-377526"
	I1209 04:17:45.937608 1581510 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:17:45.938082 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:45.973528 1581510 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1209 04:17:45.976489 1581510 out.go:179]   - Using image docker.io/registry:3.0.0
	I1209 04:17:45.979466 1581510 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1209 04:17:45.979496 1581510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1209 04:17:45.979565 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:45.991147 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:46.034012 1581510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:17:46.056611 1581510 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-377526"
	I1209 04:17:46.056723 1581510 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:17:46.057058 1581510 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1209 04:17:46.062190 1581510 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1209 04:17:46.062266 1581510 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1209 04:17:46.062406 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:46.085068 1581510 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1209 04:17:46.095049 1581510 addons.go:239] Setting addon default-storageclass=true in "addons-377526"
	I1209 04:17:46.095157 1581510 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 04:17:46.095169 1581510 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:17:46.095997 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:46.127766 1581510 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1209 04:17:46.132098 1581510 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1209 04:17:46.132122 1581510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1209 04:17:46.132189 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:46.132840 1581510 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1209 04:17:46.136233 1581510 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1209 04:17:46.142175 1581510 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1209 04:17:46.147186 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:46.147963 1581510 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1209 04:17:46.148255 1581510 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1209 04:17:46.148297 1581510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1209 04:17:46.148383 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:46.148672 1581510 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1209 04:17:46.148936 1581510 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1209 04:17:46.148948 1581510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1209 04:17:46.149001 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:46.175823 1581510 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1209 04:17:46.180777 1581510 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 04:17:46.180838 1581510 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 04:17:46.180921 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:46.194967 1581510 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1209 04:17:46.194987 1581510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1209 04:17:46.195052 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:46.202257 1581510 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1209 04:17:46.206706 1581510 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:17:46.206738 1581510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 04:17:46.206812 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:46.212661 1581510 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1209 04:17:46.212684 1581510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1209 04:17:46.212765 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:46.259327 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	W1209 04:17:46.260296 1581510 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1209 04:17:46.263621 1581510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1209 04:17:46.263792 1581510 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1209 04:17:46.263802 1581510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1209 04:17:46.263891 1581510 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1209 04:17:46.264429 1581510 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:17:46.266001 1581510 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1209 04:17:46.266293 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:46.266314 1581510 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1209 04:17:46.266544 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:17:46.292449 1581510 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1209 04:17:46.292506 1581510 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1209 04:17:46.292618 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:46.301358 1581510 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1209 04:17:46.301389 1581510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1209 04:17:46.301446 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:46.328918 1581510 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1209 04:17:46.332420 1581510 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1209 04:17:46.341380 1581510 out.go:179]   - Using image docker.io/busybox:stable
	I1209 04:17:46.375798 1581510 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1209 04:17:46.376206 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:17:46.377259 1581510 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1209 04:17:46.377278 1581510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1209 04:17:46.377362 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:46.383751 1581510 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1209 04:17:46.384321 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:17:46.392554 1581510 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1209 04:17:46.397668 1581510 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1209 04:17:46.402639 1581510 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1209 04:17:46.406516 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:17:46.412009 1581510 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1209 04:17:46.414800 1581510 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 04:17:46.414822 1581510 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 04:17:46.414900 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:46.417567 1581510 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1209 04:17:46.417597 1581510 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1209 04:17:46.417677 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:46.430912 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:17:46.434909 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:17:46.440552 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:17:46.455089 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:17:46.495356 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:17:46.518380 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:17:46.524472 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:17:46.530839 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:17:46.533805 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	W1209 04:17:46.536197 1581510 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1209 04:17:46.536237 1581510 retry.go:31] will retry after 286.932654ms: ssh: handshake failed: EOF
	I1209 04:17:46.577445 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:17:46.583554 1581510 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1209 04:17:46.828880 1581510 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1209 04:17:46.828909 1581510 retry.go:31] will retry after 224.110959ms: ssh: handshake failed: EOF
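
Both SSH handshake failures above are transient (the node's sshd is still coming up), and sshutil retries each one after a short randomized delay rather than failing the addon install. A minimal sketch of that retry-with-jitter shape in plain Go; the attempt count and base delay are illustrative, not the values retry.go uses:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithJitter re-runs op until it succeeds or attempts run out,
    // sleeping a randomized multiple of base between tries.
    func retryWithJitter(attempts int, base time.Duration, op func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = op(); err == nil {
    			return nil
    		}
    		d := base + time.Duration(rand.Int63n(int64(base))) // jitter avoids thundering herds
    		fmt.Printf("will retry after %v: %v\n", d, err)
    		time.Sleep(d)
    	}
    	return err
    }

    func main() {
    	n := 0
    	_ = retryWithJitter(5, 200*time.Millisecond, func() error {
    		if n++; n < 3 {
    			return errors.New("ssh: handshake failed: EOF")
    		}
    		return nil
    	})
    }
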
	I1209 04:17:47.011009 1581510 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1209 04:17:47.011039 1581510 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1209 04:17:47.157344 1581510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1209 04:17:47.183740 1581510 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1209 04:17:47.183803 1581510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1209 04:17:47.224248 1581510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1209 04:17:47.233916 1581510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:17:47.316756 1581510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1209 04:17:47.320991 1581510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:17:47.367965 1581510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1209 04:17:47.369483 1581510 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1209 04:17:47.369520 1581510 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1209 04:17:47.372747 1581510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1209 04:17:47.378054 1581510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1209 04:17:47.400196 1581510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1209 04:17:47.537859 1581510 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1209 04:17:47.537930 1581510 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1209 04:17:47.609350 1581510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1209 04:17:47.616231 1581510 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 04:17:47.616314 1581510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1209 04:17:47.619193 1581510 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1209 04:17:47.619265 1581510 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1209 04:17:47.679327 1581510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1209 04:17:47.727897 1581510 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1209 04:17:47.727964 1581510 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1209 04:17:47.774140 1581510 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1209 04:17:47.774215 1581510 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1209 04:17:47.911542 1581510 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1209 04:17:47.911615 1581510 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1209 04:17:47.925806 1581510 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 04:17:47.925872 1581510 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 04:17:47.927211 1581510 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1209 04:17:47.927266 1581510 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1209 04:17:47.942304 1581510 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1209 04:17:47.942376 1581510 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1209 04:17:48.106131 1581510 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1209 04:17:48.106209 1581510 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1209 04:17:48.125039 1581510 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1209 04:17:48.125105 1581510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1209 04:17:48.288587 1581510 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 04:17:48.288668 1581510 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 04:17:48.338700 1581510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1209 04:17:48.372149 1581510 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1209 04:17:48.372230 1581510 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1209 04:17:48.384794 1581510 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1209 04:17:48.384858 1581510 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1209 04:17:48.450383 1581510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 04:17:48.603562 1581510 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 04:17:48.603629 1581510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1209 04:17:48.641377 1581510 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1209 04:17:48.641460 1581510 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1209 04:17:48.707085 1581510 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.443431045s)
	I1209 04:17:48.707173 1581510 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.123551536s)
	I1209 04:17:48.707976 1581510 node_ready.go:35] waiting up to 6m0s for node "addons-377526" to be "Ready" ...
	I1209 04:17:48.708204 1581510 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
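
The bash pipeline that just completed rewrites the coredns ConfigMap in place: sed inserts a hosts stanza ahead of the existing "forward . /etc/resolv.conf" line and a "log" directive ahead of "errors", then the result is fed back through kubectl replace. That is what makes host.minikube.internal resolve to the gateway (192.168.49.1) from inside pods. Reconstructed from those sed expressions, the injected Corefile fragment looks like this (the surrounding default directives are elided):

        log
        errors
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
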
	I1209 04:17:48.854245 1581510 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1209 04:17:48.854307 1581510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1209 04:17:48.982680 1581510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 04:17:49.102498 1581510 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1209 04:17:49.102566 1581510 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1209 04:17:49.215865 1581510 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-377526" context rescaled to 1 replicas
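
The kapi.go:214 line records coredns being pinned to a single replica, which is all a one-node test cluster needs. A sketch of that rescale via the Scale subresource with client-go; clientset construction is elided, and the function name is illustrative:

    package addonsketch

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // scaleCoreDNS pins the coredns deployment to the given replica count,
    // mirroring the "rescaled to 1 replicas" step in the log.
    func scaleCoreDNS(ctx context.Context, cs kubernetes.Interface, replicas int32) error {
    	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	scale.Spec.Replicas = replicas
    	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
    	return err
    }
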
	I1209 04:17:49.253147 1581510 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1209 04:17:49.253168 1581510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1209 04:17:49.404067 1581510 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1209 04:17:49.404087 1581510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1209 04:17:49.570071 1581510 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1209 04:17:49.570139 1581510 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1209 04:17:49.781579 1581510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1209 04:17:50.722197 1581510 node_ready.go:57] node "addons-377526" has "Ready":"False" status (will retry)
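
These node_ready.go warnings recur until the kubelet posts Ready=True, within the 6m0s budget declared at node_ready.go:35 above. The predicate behind that wait loop reduces to scanning the node's status conditions; a minimal client-go sketch, with clientset construction elided:

    package addonsketch

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // nodeReady reports whether the named node has condition Ready=True.
    func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
    	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }
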
	I1209 04:17:51.090793 1581510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (3.933371324s)
	I1209 04:17:52.401286 1581510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.176961955s)
	I1209 04:17:52.401361 1581510 addons.go:495] Verifying addon ingress=true in "addons-377526"
	I1209 04:17:52.401544 1581510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.167597423s)
	I1209 04:17:52.401838 1581510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.085047294s)
	I1209 04:17:52.401864 1581510 addons.go:495] Verifying addon registry=true in "addons-377526"
	I1209 04:17:52.401893 1581510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.080872992s)
	I1209 04:17:52.401947 1581510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.033957564s)
	I1209 04:17:52.401980 1581510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.029210399s)
	I1209 04:17:52.402021 1581510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.023936697s)
	I1209 04:17:52.402066 1581510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.001847247s)
	I1209 04:17:52.402257 1581510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.79283311s)
	I1209 04:17:52.402306 1581510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.722907706s)
	I1209 04:17:52.402370 1581510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.063602113s)
	I1209 04:17:52.402474 1581510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.952017777s)
	I1209 04:17:52.402489 1581510 addons.go:495] Verifying addon metrics-server=true in "addons-377526"
	I1209 04:17:52.402565 1581510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.419815962s)
	W1209 04:17:52.402605 1581510 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1209 04:17:52.402628 1581510 retry.go:31] will retry after 208.74496ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
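
The failure above is a CRD registration race, not a broken manifest: the same kubectl apply both creates the VolumeSnapshot CRDs and instantiates a VolumeSnapshotClass, and the API server is not yet serving the new snapshot.storage.k8s.io/v1 kinds when the class is submitted, hence "ensure CRDs are installed first". The addon machinery simply waits ~209ms and reapplies, and the follow-up apply a few lines below succeeds. A sketch of that shape, retrying an apply while the server still reports missing resource mappings; the manifest list, attempt count, and delay are illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // applyUntilCRDsSettle retries `kubectl apply` while the error indicates
    // the CRD-backed kinds are not yet registered with the API server.
    func applyUntilCRDsSettle(files []string, attempts int) error {
    	args := []string{"apply"}
    	for _, f := range files {
    		args = append(args, "-f", f)
    	}
    	var out []byte
    	var err error
    	for i := 0; i < attempts; i++ {
    		out, err = exec.Command("kubectl", args...).CombinedOutput()
    		if err == nil {
    			return nil
    		}
    		if !strings.Contains(string(out), "ensure CRDs are installed first") {
    			return fmt.Errorf("apply failed: %v\n%s", err, out) // a different error; don't retry
    		}
    		time.Sleep(250 * time.Millisecond) // give the CRDs time to register
    	}
    	return fmt.Errorf("CRDs never settled: %v\n%s", err, out)
    }

    func main() {
    	_ = applyUntilCRDsSettle([]string{"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"}, 5)
    }
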
	I1209 04:17:52.404912 1581510 out.go:179] * Verifying registry addon...
	I1209 04:17:52.405015 1581510 out.go:179] * Verifying ingress addon...
	I1209 04:17:52.406956 1581510 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-377526 service yakd-dashboard -n yakd-dashboard
	
	I1209 04:17:52.409558 1581510 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1209 04:17:52.409558 1581510 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1209 04:17:52.420726 1581510 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1209 04:17:52.420751 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:17:52.423975 1581510 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1209 04:17:52.423999 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1209 04:17:52.436947 1581510 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
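
The default-storageclass error above is an optimistic-concurrency conflict: something else updated the local-path StorageClass between minikube's read and its write, so the server rejected the stale resourceVersion ("the object has been modified"). The idiomatic fix for this class of failure is to re-read and re-apply the mutation under client-go's retry.RetryOnConflict; a minimal sketch of marking a class non-default that way, with clientset construction elided:

    package addonsketch

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/util/retry"
    )

    // markNonDefault clears the default-class annotation on a StorageClass,
    // redoing the read-modify-write whenever the server reports a conflict.
    func markNonDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
    	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
    		sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
    		if err != nil {
    			return err
    		}
    		if sc.Annotations == nil {
    			sc.Annotations = map[string]string{}
    		}
    		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
    		_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
    		return err
    	})
    }
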
	I1209 04:17:52.612044 1581510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 04:17:52.837647 1581510 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.055975014s)
	I1209 04:17:52.837738 1581510 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-377526"
	I1209 04:17:52.842853 1581510 out.go:179] * Verifying csi-hostpath-driver addon...
	I1209 04:17:52.846656 1581510 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1209 04:17:52.853000 1581510 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1209 04:17:52.853027 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
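
The kapi.go:75/:96 pairs that dominate the rest of this log are one loop per addon: list the pods matching a label selector, and keep polling while any of them is still Pending. A condensed client-go sketch of that wait; the namespace, interval, and timeout are illustrative, and clientset construction is elided:

    package addonsketch

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitForLabeledPods blocks until every pod matching selector in ns is Running.
    func waitForLabeledPods(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    			if err != nil || len(pods.Items) == 0 {
    				return false, nil // keep polling; transient list errors are retried
    			}
    			for _, p := range pods.Items {
    				if p.Status.Phase != corev1.PodRunning {
    					return false, nil
    				}
    			}
    			return true, nil
    		})
    }
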
	I1209 04:17:52.914829 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:17:52.915216 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1209 04:17:53.211662 1581510 node_ready.go:57] node "addons-377526" has "Ready":"False" status (will retry)
	I1209 04:17:53.351223 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:17:53.414969 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:17:53.415352 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:17:53.850470 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:17:53.912777 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:17:53.913232 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:17:54.037411 1581510 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1209 04:17:54.037502 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:54.057952 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:17:54.187237 1581510 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1209 04:17:54.199896 1581510 addons.go:239] Setting addon gcp-auth=true in "addons-377526"
	I1209 04:17:54.199948 1581510 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:17:54.200404 1581510 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:17:54.219781 1581510 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1209 04:17:54.219842 1581510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:17:54.236843 1581510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:17:54.341383 1581510 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1209 04:17:54.344176 1581510 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1209 04:17:54.347015 1581510 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1209 04:17:54.347036 1581510 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1209 04:17:54.351013 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:17:54.363041 1581510 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1209 04:17:54.363065 1581510 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1209 04:17:54.376513 1581510 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1209 04:17:54.376535 1581510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1209 04:17:54.389338 1581510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1209 04:17:54.415421 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:17:54.415906 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:17:54.857959 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:17:54.881296 1581510 addons.go:495] Verifying addon gcp-auth=true in "addons-377526"
	I1209 04:17:54.883527 1581510 out.go:179] * Verifying gcp-auth addon...
	I1209 04:17:54.887487 1581510 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1209 04:17:54.953948 1581510 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1209 04:17:54.953976 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:17:54.954096 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:17:54.954319 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:17:55.350710 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:17:55.390563 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:17:55.413063 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:17:55.413310 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1209 04:17:55.711463 1581510 node_ready.go:57] node "addons-377526" has "Ready":"False" status (will retry)
	I1209 04:17:55.850556 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:17:55.890618 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:17:55.913593 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:17:55.913743 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:17:56.350317 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:17:56.391204 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:17:56.413520 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:17:56.414005 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:17:56.849842 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:17:56.891019 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:17:56.913491 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:17:56.913571 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:17:57.350141 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:17:57.391211 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:17:57.413255 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:17:57.413499 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:17:57.851021 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:17:57.890911 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:17:57.913335 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:17:57.914356 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1209 04:17:58.211554 1581510 node_ready.go:57] node "addons-377526" has "Ready":"False" status (will retry)
	I1209 04:17:58.349623 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:17:58.390299 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:17:58.413678 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:17:58.413971 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:17:58.850847 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:17:58.891082 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:17:58.913349 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:17:58.914017 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:17:59.350736 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:17:59.390713 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:17:59.412583 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:17:59.413242 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:17:59.850529 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:17:59.890611 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:17:59.913755 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:17:59.914326 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1209 04:18:00.221705 1581510 node_ready.go:57] node "addons-377526" has "Ready":"False" status (will retry)
	I1209 04:18:00.351772 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:00.391355 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:00.413973 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:00.414093 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:00.851255 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:00.891047 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:00.913068 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:00.913295 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:01.350378 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:01.400078 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:01.420708 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:01.421398 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:01.849909 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:01.890877 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:01.913245 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:01.913792 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:02.350617 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:02.390826 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:02.412771 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:02.413098 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1209 04:18:02.711015 1581510 node_ready.go:57] node "addons-377526" has "Ready":"False" status (will retry)
	I1209 04:18:02.850549 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:02.891187 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:02.913316 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:02.914126 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:03.350360 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:03.391235 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:03.413096 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:03.413667 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:03.849594 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:03.891414 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:03.913524 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:03.913726 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:04.350019 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:04.390870 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:04.413083 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:04.413361 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1209 04:18:04.711530 1581510 node_ready.go:57] node "addons-377526" has "Ready":"False" status (will retry)
	I1209 04:18:04.850683 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:04.890403 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:04.913821 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:04.913962 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:05.350727 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:05.390408 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:05.413575 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:05.414061 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:05.851464 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:05.891350 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:05.914954 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:05.915514 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:06.350307 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:06.391652 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:06.413540 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:06.413958 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:06.850218 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:06.891619 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:06.913395 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:06.913761 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1209 04:18:07.210858 1581510 node_ready.go:57] node "addons-377526" has "Ready":"False" status (will retry)
	I1209 04:18:07.350121 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:07.390930 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:07.412884 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:07.413860 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:07.849997 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:07.890534 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:07.912606 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:07.912748 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:08.350017 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:08.390761 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:08.412741 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:08.413106 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:08.849895 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:08.890909 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:08.913661 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:08.913930 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1209 04:18:09.211158 1581510 node_ready.go:57] node "addons-377526" has "Ready":"False" status (will retry)
	I1209 04:18:09.350230 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:09.391238 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:09.413196 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:09.413356 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:09.850984 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:09.890685 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:09.913295 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:09.913450 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:10.350664 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:10.390598 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:10.413695 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:10.414053 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:10.850490 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:10.890479 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:10.913582 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:10.913764 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:11.350691 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:11.390621 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:11.413854 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:11.414091 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1209 04:18:11.711260 1581510 node_ready.go:57] node "addons-377526" has "Ready":"False" status (will retry)
	I1209 04:18:11.852130 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:11.891217 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:11.913073 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:11.913228 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:12.350324 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:12.391131 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:12.413197 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:12.413341 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:12.850256 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:12.891140 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:12.913556 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:12.913834 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:13.350233 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:13.391232 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:13.413644 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:13.413906 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:13.850875 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:13.890430 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:13.913885 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:13.913958 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1209 04:18:14.211095 1581510 node_ready.go:57] node "addons-377526" has "Ready":"False" status (will retry)
	I1209 04:18:14.350034 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:14.391055 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:14.413235 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:14.413348 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:14.849979 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:14.890805 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:14.912822 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:14.912958 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:15.350112 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:15.390917 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:15.413104 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:15.413503 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:15.851707 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:15.890863 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:15.913465 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:15.913582 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:16.350137 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:16.391007 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:16.413225 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:16.413330 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1209 04:18:16.711236 1581510 node_ready.go:57] node "addons-377526" has "Ready":"False" status (will retry)
	I1209 04:18:16.851058 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:16.890707 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:16.912661 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:16.913021 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:17.349906 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:17.390680 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:17.412773 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:17.413061 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:17.851066 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:17.890788 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:17.912820 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:17.913071 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:18.349752 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:18.390611 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:18.413689 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:18.413895 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:18.850844 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:18.891042 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:18.913081 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:18.913512 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1209 04:18:19.212157 1581510 node_ready.go:57] node "addons-377526" has "Ready":"False" status (will retry)
	I1209 04:18:19.350590 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:19.391046 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:19.414766 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:19.414996 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:19.849911 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:19.891342 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:19.914454 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:19.914986 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:20.349650 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:20.390693 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:20.412521 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:20.413021 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:20.851389 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:20.891342 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:20.913530 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:20.913723 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:21.350285 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:21.391203 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:21.413567 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:21.413946 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1209 04:18:21.710776 1581510 node_ready.go:57] node "addons-377526" has "Ready":"False" status (will retry)
	I1209 04:18:21.850393 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:21.891246 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:21.913128 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:21.913321 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:22.349798 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:22.390827 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:22.413271 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:22.413643 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:22.849642 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:22.890816 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:22.913177 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:22.913254 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:23.350185 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:23.390893 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:23.413013 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:23.413190 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1209 04:18:23.711175 1581510 node_ready.go:57] node "addons-377526" has "Ready":"False" status (will retry)
	I1209 04:18:23.850795 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:23.890684 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:23.912662 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:23.912880 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:24.350375 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:24.391490 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:24.413382 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:24.413733 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:24.849801 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:24.890413 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:24.914022 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:24.914299 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:25.349965 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:25.390781 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:25.412960 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:25.413283 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1209 04:18:25.711656 1581510 node_ready.go:57] node "addons-377526" has "Ready":"False" status (will retry)
	I1209 04:18:25.849982 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:25.890949 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:25.913336 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:25.913392 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:26.350050 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:26.391093 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:26.413283 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:26.413739 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:26.849713 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:26.893549 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:26.912582 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:26.913000 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:27.349756 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:27.390759 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:27.412562 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:27.412819 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:27.849508 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:27.890340 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:27.913690 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:27.913755 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:28.228189 1581510 node_ready.go:49] node "addons-377526" is "Ready"
	I1209 04:18:28.228240 1581510 node_ready.go:38] duration metric: took 39.520237725s for node "addons-377526" to be "Ready" ...
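The ~40 seconds of "Ready":"False" warnings above end here, once the node's Ready condition flips to True. A minimal standalone sketch of that kind of node-readiness poll with client-go follows; it is not minikube's node_ready.go, and the kubeconfig path and 2s interval are assumptions of the sketch.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isNodeReady reports whether the node's Ready condition is True.
func isNodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: credentials come from the default kubeconfig location.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Re-fetch the node until it reports Ready, as the warnings above retry.
	const nodeName = "addons-377526"
	for {
		node, err := clientset.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
		if err == nil && isNodeReady(node) {
			fmt.Printf("node %q is Ready\n", nodeName)
			return
		}
		time.Sleep(2 * time.Second)
	}
}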
	I1209 04:18:28.228269 1581510 api_server.go:52] waiting for apiserver process to appear ...
	I1209 04:18:28.228335 1581510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:18:28.244351 1581510 api_server.go:72] duration metric: took 42.401322922s to wait for apiserver process to appear ...
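The apiserver-process wait above shells out to pgrep: -f matches the pattern against each full command line, -x requires the match to cover the whole line, and -n keeps only the newest matching PID, so exit status 0 means a kube-apiserver process is up. A sketch of the same one-shot check (pgrep and, as in the log, sudo are assumed to be available):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Exit status 0 plus a PID on stdout means the process exists.
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("kube-apiserver process not found yet")
		return
	}
	fmt.Printf("kube-apiserver pid: %s\n", strings.TrimSpace(string(out)))
}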
	I1209 04:18:28.244378 1581510 api_server.go:88] waiting for apiserver healthz status ...
	I1209 04:18:28.244398 1581510 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1209 04:18:28.259307 1581510 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1209 04:18:28.260868 1581510 api_server.go:141] control plane version: v1.34.2
	I1209 04:18:28.260899 1581510 api_server.go:131] duration metric: took 16.514257ms to wait for apiserver health ...
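The health wait is plain HTTP: GET /healthz on the apiserver until it answers 200 with body "ok", exactly the exchange logged above for https://192.168.49.2:8443/healthz. A self-contained sketch of such a probe (InsecureSkipVerify is a sketch-only shortcut to avoid wiring up the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 with body "ok".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for the sketch: skip cert verification rather
			// than loading the cluster CA bundle.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz did not return ok within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.49.2:8443/healthz", 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("apiserver healthy")
}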
	I1209 04:18:28.260921 1581510 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 04:18:28.272678 1581510 system_pods.go:59] 19 kube-system pods found
	I1209 04:18:28.272766 1581510 system_pods.go:61] "coredns-66bc5c9577-rvbf9" [35948c37-785f-4aa1-9a5b-943c895a4a5c] Pending
	I1209 04:18:28.272787 1581510 system_pods.go:61] "csi-hostpath-attacher-0" [e4b01171-b9b6-4022-9f41-57183f5d762b] Pending
	I1209 04:18:28.272808 1581510 system_pods.go:61] "csi-hostpath-resizer-0" [a0052ab4-7687-4065-80b6-f41145b70608] Pending
	I1209 04:18:28.272840 1581510 system_pods.go:61] "csi-hostpathplugin-865n6" [ccf13813-b372-48b8-b02b-3ba9cffd5291] Pending
	I1209 04:18:28.272865 1581510 system_pods.go:61] "etcd-addons-377526" [d54c5e9a-cbe9-487f-b06a-c8626b1d468b] Running
	I1209 04:18:28.272885 1581510 system_pods.go:61] "kindnet-whbx4" [314b6981-5dab-4d60-ad7b-4ee5fafa37fe] Running
	I1209 04:18:28.272906 1581510 system_pods.go:61] "kube-apiserver-addons-377526" [83c59d5f-df9b-4e36-a82e-53243caa5583] Running
	I1209 04:18:28.272926 1581510 system_pods.go:61] "kube-controller-manager-addons-377526" [34b73da0-2ba8-42fa-b9e5-6f64f0a71841] Running
	I1209 04:18:28.272954 1581510 system_pods.go:61] "kube-ingress-dns-minikube" [836995ce-e2ce-4f7c-bd3f-ecae6ed195a6] Pending
	I1209 04:18:28.272978 1581510 system_pods.go:61] "kube-proxy-vrrb5" [c66f00d5-5347-4e19-8806-1d20162ad7ba] Running
	I1209 04:18:28.272998 1581510 system_pods.go:61] "kube-scheduler-addons-377526" [645ccb07-6b76-4a43-a7e2-94b2ce1d9004] Running
	I1209 04:18:28.273022 1581510 system_pods.go:61] "metrics-server-85b7d694d7-pckkq" [5cf1cd5f-cc2e-4169-947b-41b6e4c45a46] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 04:18:28.273041 1581510 system_pods.go:61] "nvidia-device-plugin-daemonset-qpgbq" [bbfd593a-3793-4122-af52-8a0e32e51d36] Pending
	I1209 04:18:28.273077 1581510 system_pods.go:61] "registry-6b586f9694-pd2mr" [53908d34-a310-48b3-ae54-ebda566b420b] Pending
	I1209 04:18:28.273096 1581510 system_pods.go:61] "registry-creds-764b6fb674-hdrg9" [6de1311b-03a7-4949-9055-39d7b8dbf7fe] Pending
	I1209 04:18:28.273115 1581510 system_pods.go:61] "registry-proxy-nlsrb" [2b82ec5a-1e98-4bd4-b422-b6fb23cad87c] Pending
	I1209 04:18:28.273135 1581510 system_pods.go:61] "snapshot-controller-7d9fbc56b8-tx8sz" [b65548e1-7050-4896-b543-115dbcf7a7ba] Pending
	I1209 04:18:28.273166 1581510 system_pods.go:61] "snapshot-controller-7d9fbc56b8-zphwq" [d3462a77-4488-4807-8fb0-61c704574e50] Pending
	I1209 04:18:28.273191 1581510 system_pods.go:61] "storage-provisioner" [e235f449-4cd1-4e82-b1b2-9ba59f81b5b0] Pending
	I1209 04:18:28.273212 1581510 system_pods.go:74] duration metric: took 12.284274ms to wait for pod list to return data ...
	I1209 04:18:28.273234 1581510 default_sa.go:34] waiting for default service account to be created ...
	I1209 04:18:28.314073 1581510 default_sa.go:45] found service account: "default"
	I1209 04:18:28.314111 1581510 default_sa.go:55] duration metric: took 40.856873ms for default service account to be created ...
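The default_sa.go wait above is a simple existence check: keep asking for the "default" ServiceAccount until the service account controller has created it. A hedged sketch of one such poll (the "default" namespace and the 100ms interval are assumptions):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// A NotFound error simply means the controller hasn't created it yet.
	for {
		sa, err := clientset.CoreV1().ServiceAccounts("default").Get(
			context.TODO(), "default", metav1.GetOptions{})
		if err == nil {
			fmt.Printf("found service account: %q\n", sa.Name)
			return
		}
		time.Sleep(100 * time.Millisecond)
	}
}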
	I1209 04:18:28.314123 1581510 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 04:18:28.452461 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:28.452717 1581510 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1209 04:18:28.452796 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:28.453815 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:28.454847 1581510 system_pods.go:86] 19 kube-system pods found
	I1209 04:18:28.454911 1581510 system_pods.go:89] "coredns-66bc5c9577-rvbf9" [35948c37-785f-4aa1-9a5b-943c895a4a5c] Pending
	I1209 04:18:28.454932 1581510 system_pods.go:89] "csi-hostpath-attacher-0" [e4b01171-b9b6-4022-9f41-57183f5d762b] Pending
	I1209 04:18:28.454953 1581510 system_pods.go:89] "csi-hostpath-resizer-0" [a0052ab4-7687-4065-80b6-f41145b70608] Pending
	I1209 04:18:28.454993 1581510 system_pods.go:89] "csi-hostpathplugin-865n6" [ccf13813-b372-48b8-b02b-3ba9cffd5291] Pending
	I1209 04:18:28.455018 1581510 system_pods.go:89] "etcd-addons-377526" [d54c5e9a-cbe9-487f-b06a-c8626b1d468b] Running
	I1209 04:18:28.455041 1581510 system_pods.go:89] "kindnet-whbx4" [314b6981-5dab-4d60-ad7b-4ee5fafa37fe] Running
	I1209 04:18:28.455080 1581510 system_pods.go:89] "kube-apiserver-addons-377526" [83c59d5f-df9b-4e36-a82e-53243caa5583] Running
	I1209 04:18:28.455104 1581510 system_pods.go:89] "kube-controller-manager-addons-377526" [34b73da0-2ba8-42fa-b9e5-6f64f0a71841] Running
	I1209 04:18:28.455127 1581510 system_pods.go:89] "kube-ingress-dns-minikube" [836995ce-e2ce-4f7c-bd3f-ecae6ed195a6] Pending
	I1209 04:18:28.455167 1581510 system_pods.go:89] "kube-proxy-vrrb5" [c66f00d5-5347-4e19-8806-1d20162ad7ba] Running
	I1209 04:18:28.455194 1581510 system_pods.go:89] "kube-scheduler-addons-377526" [645ccb07-6b76-4a43-a7e2-94b2ce1d9004] Running
	I1209 04:18:28.455219 1581510 system_pods.go:89] "metrics-server-85b7d694d7-pckkq" [5cf1cd5f-cc2e-4169-947b-41b6e4c45a46] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 04:18:28.455256 1581510 system_pods.go:89] "nvidia-device-plugin-daemonset-qpgbq" [bbfd593a-3793-4122-af52-8a0e32e51d36] Pending
	I1209 04:18:28.455286 1581510 system_pods.go:89] "registry-6b586f9694-pd2mr" [53908d34-a310-48b3-ae54-ebda566b420b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1209 04:18:28.455307 1581510 system_pods.go:89] "registry-creds-764b6fb674-hdrg9" [6de1311b-03a7-4949-9055-39d7b8dbf7fe] Pending
	I1209 04:18:28.455349 1581510 system_pods.go:89] "registry-proxy-nlsrb" [2b82ec5a-1e98-4bd4-b422-b6fb23cad87c] Pending
	I1209 04:18:28.455371 1581510 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tx8sz" [b65548e1-7050-4896-b543-115dbcf7a7ba] Pending
	I1209 04:18:28.455395 1581510 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zphwq" [d3462a77-4488-4807-8fb0-61c704574e50] Pending
	I1209 04:18:28.455446 1581510 system_pods.go:89] "storage-provisioner" [e235f449-4cd1-4e82-b1b2-9ba59f81b5b0] Pending
	I1209 04:18:28.455490 1581510 retry.go:31] will retry after 227.613254ms: missing components: kube-dns
	I1209 04:18:28.455951 1581510 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1209 04:18:28.456001 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
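Each interleaved kapi.go line above is one tick of the same loop: list pods by label selector ("Found 2 Pods for label selector ..."), then report any pod that is not yet Running. A compact client-go sketch of that polling shape (the all-namespaces listing and the 400ms tick are assumptions, not kapi.go itself):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// The four selectors being polled in the log above.
	selectors := []string{
		"kubernetes.io/minikube-addons=csi-hostpath-driver",
		"kubernetes.io/minikube-addons=gcp-auth",
		"kubernetes.io/minikube-addons=registry",
		"app.kubernetes.io/name=ingress-nginx",
	}
	for {
		pending := 0
		for _, sel := range selectors {
			pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(
				context.TODO(), metav1.ListOptions{LabelSelector: sel})
			if err != nil || len(pods.Items) == 0 {
				pending++
				continue
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", sel, p.Status.Phase)
					pending++
					break
				}
			}
		}
		if pending == 0 {
			return
		}
		time.Sleep(400 * time.Millisecond)
	}
}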
	I1209 04:18:28.688540 1581510 system_pods.go:86] 19 kube-system pods found
	I1209 04:18:28.688640 1581510 system_pods.go:89] "coredns-66bc5c9577-rvbf9" [35948c37-785f-4aa1-9a5b-943c895a4a5c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 04:18:28.688676 1581510 system_pods.go:89] "csi-hostpath-attacher-0" [e4b01171-b9b6-4022-9f41-57183f5d762b] Pending
	I1209 04:18:28.688724 1581510 system_pods.go:89] "csi-hostpath-resizer-0" [a0052ab4-7687-4065-80b6-f41145b70608] Pending
	I1209 04:18:28.688743 1581510 system_pods.go:89] "csi-hostpathplugin-865n6" [ccf13813-b372-48b8-b02b-3ba9cffd5291] Pending
	I1209 04:18:28.688779 1581510 system_pods.go:89] "etcd-addons-377526" [d54c5e9a-cbe9-487f-b06a-c8626b1d468b] Running
	I1209 04:18:28.688802 1581510 system_pods.go:89] "kindnet-whbx4" [314b6981-5dab-4d60-ad7b-4ee5fafa37fe] Running
	I1209 04:18:28.688822 1581510 system_pods.go:89] "kube-apiserver-addons-377526" [83c59d5f-df9b-4e36-a82e-53243caa5583] Running
	I1209 04:18:28.688858 1581510 system_pods.go:89] "kube-controller-manager-addons-377526" [34b73da0-2ba8-42fa-b9e5-6f64f0a71841] Running
	I1209 04:18:28.688883 1581510 system_pods.go:89] "kube-ingress-dns-minikube" [836995ce-e2ce-4f7c-bd3f-ecae6ed195a6] Pending
	I1209 04:18:28.688912 1581510 system_pods.go:89] "kube-proxy-vrrb5" [c66f00d5-5347-4e19-8806-1d20162ad7ba] Running
	I1209 04:18:28.688964 1581510 system_pods.go:89] "kube-scheduler-addons-377526" [645ccb07-6b76-4a43-a7e2-94b2ce1d9004] Running
	I1209 04:18:28.688997 1581510 system_pods.go:89] "metrics-server-85b7d694d7-pckkq" [5cf1cd5f-cc2e-4169-947b-41b6e4c45a46] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 04:18:28.689045 1581510 system_pods.go:89] "nvidia-device-plugin-daemonset-qpgbq" [bbfd593a-3793-4122-af52-8a0e32e51d36] Pending
	I1209 04:18:28.689069 1581510 system_pods.go:89] "registry-6b586f9694-pd2mr" [53908d34-a310-48b3-ae54-ebda566b420b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1209 04:18:28.689090 1581510 system_pods.go:89] "registry-creds-764b6fb674-hdrg9" [6de1311b-03a7-4949-9055-39d7b8dbf7fe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1209 04:18:28.689125 1581510 system_pods.go:89] "registry-proxy-nlsrb" [2b82ec5a-1e98-4bd4-b422-b6fb23cad87c] Pending
	I1209 04:18:28.689151 1581510 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tx8sz" [b65548e1-7050-4896-b543-115dbcf7a7ba] Pending
	I1209 04:18:28.689171 1581510 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zphwq" [d3462a77-4488-4807-8fb0-61c704574e50] Pending
	I1209 04:18:28.689213 1581510 system_pods.go:89] "storage-provisioner" [e235f449-4cd1-4e82-b1b2-9ba59f81b5b0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 04:18:28.689252 1581510 retry.go:31] will retry after 309.414535ms: missing components: kube-dns
	I1209 04:18:28.864133 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:28.938349 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:29.020637 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:29.024271 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:29.107928 1581510 system_pods.go:86] 19 kube-system pods found
	I1209 04:18:29.108015 1581510 system_pods.go:89] "coredns-66bc5c9577-rvbf9" [35948c37-785f-4aa1-9a5b-943c895a4a5c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 04:18:29.108038 1581510 system_pods.go:89] "csi-hostpath-attacher-0" [e4b01171-b9b6-4022-9f41-57183f5d762b] Pending
	I1209 04:18:29.108079 1581510 system_pods.go:89] "csi-hostpath-resizer-0" [a0052ab4-7687-4065-80b6-f41145b70608] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1209 04:18:29.108104 1581510 system_pods.go:89] "csi-hostpathplugin-865n6" [ccf13813-b372-48b8-b02b-3ba9cffd5291] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1209 04:18:29.108123 1581510 system_pods.go:89] "etcd-addons-377526" [d54c5e9a-cbe9-487f-b06a-c8626b1d468b] Running
	I1209 04:18:29.108159 1581510 system_pods.go:89] "kindnet-whbx4" [314b6981-5dab-4d60-ad7b-4ee5fafa37fe] Running
	I1209 04:18:29.108184 1581510 system_pods.go:89] "kube-apiserver-addons-377526" [83c59d5f-df9b-4e36-a82e-53243caa5583] Running
	I1209 04:18:29.108205 1581510 system_pods.go:89] "kube-controller-manager-addons-377526" [34b73da0-2ba8-42fa-b9e5-6f64f0a71841] Running
	I1209 04:18:29.108243 1581510 system_pods.go:89] "kube-ingress-dns-minikube" [836995ce-e2ce-4f7c-bd3f-ecae6ed195a6] Pending
	I1209 04:18:29.108267 1581510 system_pods.go:89] "kube-proxy-vrrb5" [c66f00d5-5347-4e19-8806-1d20162ad7ba] Running
	I1209 04:18:29.108288 1581510 system_pods.go:89] "kube-scheduler-addons-377526" [645ccb07-6b76-4a43-a7e2-94b2ce1d9004] Running
	I1209 04:18:29.108329 1581510 system_pods.go:89] "metrics-server-85b7d694d7-pckkq" [5cf1cd5f-cc2e-4169-947b-41b6e4c45a46] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 04:18:29.108356 1581510 system_pods.go:89] "nvidia-device-plugin-daemonset-qpgbq" [bbfd593a-3793-4122-af52-8a0e32e51d36] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1209 04:18:29.108380 1581510 system_pods.go:89] "registry-6b586f9694-pd2mr" [53908d34-a310-48b3-ae54-ebda566b420b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1209 04:18:29.108418 1581510 system_pods.go:89] "registry-creds-764b6fb674-hdrg9" [6de1311b-03a7-4949-9055-39d7b8dbf7fe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1209 04:18:29.108449 1581510 system_pods.go:89] "registry-proxy-nlsrb" [2b82ec5a-1e98-4bd4-b422-b6fb23cad87c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1209 04:18:29.108471 1581510 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tx8sz" [b65548e1-7050-4896-b543-115dbcf7a7ba] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 04:18:29.108507 1581510 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zphwq" [d3462a77-4488-4807-8fb0-61c704574e50] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 04:18:29.108535 1581510 system_pods.go:89] "storage-provisioner" [e235f449-4cd1-4e82-b1b2-9ba59f81b5b0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1209 04:18:29.108579 1581510 retry.go:31] will retry after 341.198674ms: missing components: kube-dns
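The three retry.go lines above (227ms, 309ms, 341ms) show the pod survey being re-run after a short randomized delay while kube-dns is still missing. A minimal sketch of that retry shape, with a hypothetical check function and an assumed 200-400ms jitter band:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// waitForComponents re-runs check until it reports nothing missing,
// sleeping a jittered delay between attempts.
func waitForComponents(check func() []string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		missing := check()
		if len(missing) == 0 {
			return nil
		}
		d := 200*time.Millisecond + time.Duration(rand.Int63n(int64(200*time.Millisecond)))
		fmt.Printf("will retry after %v: missing components: %v\n", d, missing)
		time.Sleep(d)
	}
	return fmt.Errorf("components still missing after %v", timeout)
}

func main() {
	// Hypothetical stand-in: kube-dns shows up on the third attempt.
	attempts := 0
	check := func() []string {
		attempts++
		if attempts < 3 {
			return []string{"kube-dns"}
		}
		return nil
	}
	if err := waitForComponents(check, time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("all components running")
}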
	I1209 04:18:29.353134 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:29.398149 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:29.414349 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:29.424339 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:29.457139 1581510 system_pods.go:86] 19 kube-system pods found
	I1209 04:18:29.457219 1581510 system_pods.go:89] "coredns-66bc5c9577-rvbf9" [35948c37-785f-4aa1-9a5b-943c895a4a5c] Running
	I1209 04:18:29.457246 1581510 system_pods.go:89] "csi-hostpath-attacher-0" [e4b01171-b9b6-4022-9f41-57183f5d762b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1209 04:18:29.457269 1581510 system_pods.go:89] "csi-hostpath-resizer-0" [a0052ab4-7687-4065-80b6-f41145b70608] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1209 04:18:29.457327 1581510 system_pods.go:89] "csi-hostpathplugin-865n6" [ccf13813-b372-48b8-b02b-3ba9cffd5291] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1209 04:18:29.457346 1581510 system_pods.go:89] "etcd-addons-377526" [d54c5e9a-cbe9-487f-b06a-c8626b1d468b] Running
	I1209 04:18:29.457383 1581510 system_pods.go:89] "kindnet-whbx4" [314b6981-5dab-4d60-ad7b-4ee5fafa37fe] Running
	I1209 04:18:29.457407 1581510 system_pods.go:89] "kube-apiserver-addons-377526" [83c59d5f-df9b-4e36-a82e-53243caa5583] Running
	I1209 04:18:29.457428 1581510 system_pods.go:89] "kube-controller-manager-addons-377526" [34b73da0-2ba8-42fa-b9e5-6f64f0a71841] Running
	I1209 04:18:29.457465 1581510 system_pods.go:89] "kube-ingress-dns-minikube" [836995ce-e2ce-4f7c-bd3f-ecae6ed195a6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1209 04:18:29.457488 1581510 system_pods.go:89] "kube-proxy-vrrb5" [c66f00d5-5347-4e19-8806-1d20162ad7ba] Running
	I1209 04:18:29.457511 1581510 system_pods.go:89] "kube-scheduler-addons-377526" [645ccb07-6b76-4a43-a7e2-94b2ce1d9004] Running
	I1209 04:18:29.457548 1581510 system_pods.go:89] "metrics-server-85b7d694d7-pckkq" [5cf1cd5f-cc2e-4169-947b-41b6e4c45a46] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 04:18:29.457582 1581510 system_pods.go:89] "nvidia-device-plugin-daemonset-qpgbq" [bbfd593a-3793-4122-af52-8a0e32e51d36] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1209 04:18:29.457621 1581510 system_pods.go:89] "registry-6b586f9694-pd2mr" [53908d34-a310-48b3-ae54-ebda566b420b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1209 04:18:29.457648 1581510 system_pods.go:89] "registry-creds-764b6fb674-hdrg9" [6de1311b-03a7-4949-9055-39d7b8dbf7fe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1209 04:18:29.457670 1581510 system_pods.go:89] "registry-proxy-nlsrb" [2b82ec5a-1e98-4bd4-b422-b6fb23cad87c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1209 04:18:29.457713 1581510 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tx8sz" [b65548e1-7050-4896-b543-115dbcf7a7ba] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 04:18:29.457739 1581510 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zphwq" [d3462a77-4488-4807-8fb0-61c704574e50] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 04:18:29.457759 1581510 system_pods.go:89] "storage-provisioner" [e235f449-4cd1-4e82-b1b2-9ba59f81b5b0] Running
	I1209 04:18:29.457798 1581510 system_pods.go:126] duration metric: took 1.1436685s to wait for k8s-apps to be running ...
	I1209 04:18:29.457824 1581510 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 04:18:29.457946 1581510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 04:18:29.484181 1581510 system_svc.go:56] duration metric: took 26.349496ms WaitForService to wait for kubelet
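The kubelet check above runs "sudo systemctl is-active --quiet service kubelet" over SSH; with --quiet, systemctl prints nothing and the exit status alone (0 = active) carries the answer. Run locally against the kubelet unit, the same idea reduces to a few lines (local execution and the plain "kubelet" unit name are assumptions of the sketch):

package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive returns true when the kubelet systemd unit is active;
// because of --quiet, only the exit status matters.
func kubeletActive() bool {
	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}

func main() {
	if kubeletActive() {
		fmt.Println("kubelet service is running")
	} else {
		fmt.Println("kubelet service is not active")
	}
}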
	I1209 04:18:29.484279 1581510 kubeadm.go:587] duration metric: took 43.641254671s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 04:18:29.484331 1581510 node_conditions.go:102] verifying NodePressure condition ...
	I1209 04:18:29.494898 1581510 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1209 04:18:29.494976 1581510 node_conditions.go:123] node cpu capacity is 2
	I1209 04:18:29.495002 1581510 node_conditions.go:105] duration metric: took 10.649262ms to run NodePressure ...
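The NodePressure step above reads the node object once, logging its capacity (203034800Ki of ephemeral storage, 2 CPUs) while checking its pressure conditions. A sketch of the same read with client-go (again assuming the default kubeconfig; not minikube's node_conditions.go):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "addons-377526", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// The two capacity figures logged above.
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
	fmt.Printf("node cpu capacity is %s\n", cpu.String())

	// Any of these conditions being True would indicate node pressure.
	for _, c := range node.Status.Conditions {
		switch c.Type {
		case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
			if c.Status == corev1.ConditionTrue {
				fmt.Printf("node pressure condition: %s\n", c.Type)
			}
		}
	}
}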
	I1209 04:18:29.495041 1581510 start.go:242] waiting for startup goroutines ...
	I1209 04:18:29.852422 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:29.891810 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:29.914055 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:29.914132 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:30.351528 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:30.391565 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:30.413420 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:30.413903 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:30.851536 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:30.890443 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:30.914238 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:30.914416 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:31.351798 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:31.393386 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:31.414428 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:31.415241 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:31.850707 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:31.891045 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:31.914638 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:31.914801 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:32.350217 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:32.391515 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:32.414475 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:32.414647 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:32.854463 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:32.958227 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:32.958324 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:32.959783 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:33.351085 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:33.425353 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:33.437218 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:33.437349 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:33.852120 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:33.891624 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:33.915712 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:33.916281 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:34.350841 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:34.391412 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:34.417905 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:34.419244 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:34.855303 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:34.892416 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:34.915656 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:34.916265 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:35.354156 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:35.390941 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:35.415548 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:35.416066 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:35.851263 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:35.891580 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:35.914791 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:35.914928 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:36.350544 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:36.390714 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:36.414733 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:36.415180 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:36.851304 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:36.891467 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:36.915486 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:36.915869 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:37.350837 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:37.391830 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:37.414714 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:37.415092 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:37.851969 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:37.891401 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:37.916900 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:37.916998 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:38.349636 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:38.390413 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:38.414772 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:38.414995 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:38.850814 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:38.890871 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:38.914695 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:38.914909 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:39.351261 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:39.392226 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:39.414124 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:39.414273 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:39.850225 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:39.891341 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:39.915608 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:39.916305 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:40.351393 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:40.391202 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:40.414134 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:40.414219 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:40.852575 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:40.891447 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:40.914330 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:40.914415 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:41.351722 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:41.390684 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:41.413168 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:41.413355 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:41.850067 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:41.890674 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:41.913999 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:41.914134 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:42.351625 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:42.390947 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:42.413894 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:42.414051 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:42.851424 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:42.891288 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:42.915459 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:42.915931 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:43.351294 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:43.391422 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:43.416087 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:43.416596 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:43.850801 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:43.891376 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:43.915723 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:43.916263 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:44.351737 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:44.390845 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:44.415075 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:44.415732 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:44.850656 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:44.891032 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:44.914952 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:44.915519 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:45.351151 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:45.391401 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:45.415861 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:45.416558 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:45.851644 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:45.891265 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:45.914613 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:45.915133 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:46.351457 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:46.391568 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:46.414243 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:46.414511 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:46.850234 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:46.892340 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:46.915419 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:46.915945 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:47.352258 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:47.391604 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:47.416342 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:47.416760 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:47.851359 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:47.891235 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:47.913860 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:47.914118 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:48.350698 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:48.390843 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:48.414530 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:48.414736 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:48.850290 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:48.891383 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:48.914492 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:48.914774 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:49.349934 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:49.391963 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:49.414479 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:49.414551 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:49.851272 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:49.893030 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:49.917080 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:49.919504 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:50.350202 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:50.391992 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:50.415445 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:50.415818 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:50.860731 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:50.956832 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:50.957365 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:50.957847 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:51.351171 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:51.393398 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:51.417591 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:51.418163 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:51.854624 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:51.891642 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:51.915489 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:51.916046 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:52.351995 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:52.392010 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:52.416275 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:52.416789 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:52.852034 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:52.890858 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:52.913388 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:52.914624 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:53.351049 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:53.391652 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:53.452060 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:53.452179 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:53.851583 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:53.890975 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:53.915244 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:53.915810 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:54.350559 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:54.391328 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:54.415512 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:54.415648 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:54.851434 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:54.891546 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:54.914880 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:54.915111 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:55.350694 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:55.391331 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:55.413912 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:55.414191 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:55.851642 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:55.894956 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:55.914057 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:55.914203 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:56.350087 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:56.390923 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:56.414002 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:56.414209 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:56.852128 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:56.951528 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:56.951662 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:56.951711 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:57.350419 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:57.390477 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:57.412737 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:57.412874 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:57.850449 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:57.891575 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:57.913050 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:57.913192 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:58.350530 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:58.390174 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:58.414202 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:58.414377 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:58.851068 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:58.890652 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:58.913952 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:58.914101 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:59.351047 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:59.390708 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:59.413297 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:59.413528 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:18:59.850640 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:18:59.891092 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:18:59.915559 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:18:59.915964 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:00.351554 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:00.390931 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:00.415432 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:00.415913 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:19:00.851154 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:00.891231 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:00.914010 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:19:00.914916 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:01.350140 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:01.390882 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:01.414238 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:01.414939 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:19:01.851470 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:01.890624 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:01.914406 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:01.914932 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:19:02.351311 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:02.391865 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:02.415245 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:19:02.415452 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:02.851400 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:02.891697 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:02.913465 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:19:02.913665 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:03.351028 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:03.392380 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:03.452608 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:19:03.453008 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:03.851057 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:03.891256 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:03.914073 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:19:03.914145 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:04.351032 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:04.390926 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:04.413557 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:04.413726 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:19:04.850480 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:04.891081 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:04.913924 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:04.914075 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:19:05.350863 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:05.391278 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:05.414976 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:19:05.415143 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:05.850785 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:05.890785 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:05.914714 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:05.914800 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:19:06.350791 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:06.390706 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:06.413336 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 04:19:06.413487 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:06.850383 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:06.891278 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:06.916188 1581510 kapi.go:107] duration metric: took 1m14.506628499s to wait for kubernetes.io/minikube-addons=registry ...
	I1209 04:19:06.916555 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:07.351773 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:07.390946 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:07.413349 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:07.851281 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:07.893223 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:07.953294 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:08.353216 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:08.391125 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:08.413030 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:08.851872 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:08.891199 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:08.913990 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:09.350736 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:09.390502 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:09.413005 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:09.851107 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:09.891249 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:09.914467 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:10.352371 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:10.390988 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:10.412938 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:10.854666 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:10.890850 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:10.913047 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:11.350617 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:11.391590 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:11.414174 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:11.850735 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:11.893615 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:11.913149 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:12.360877 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:12.390772 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:12.413240 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:12.851856 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:12.953268 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:12.953395 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:13.358892 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:13.391488 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:13.412588 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:13.850286 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:13.891659 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:13.912799 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:14.350807 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:14.391802 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:14.414137 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:14.852491 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:14.891510 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:14.913190 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:15.350953 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:15.390857 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:15.413166 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:15.851370 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:15.895950 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:15.915568 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:16.351009 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:16.451903 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 04:19:16.452093 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:16.851291 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:16.891170 1581510 kapi.go:107] duration metric: took 1m22.003682613s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1209 04:19:16.892302 1581510 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-377526 cluster.
	I1209 04:19:16.893612 1581510 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1209 04:19:16.895505 1581510 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1209 04:19:16.913838 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:17.349800 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:17.412996 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:17.851771 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:17.913158 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:18.350737 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:18.413465 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:18.849827 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:18.913706 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:19.351265 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:19.413262 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:19.853140 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:19.913370 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:20.351420 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:20.413922 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:20.851201 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:20.914045 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:21.350745 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:21.412784 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:21.850088 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:21.913227 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:22.351345 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:22.414066 1581510 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 04:19:22.864838 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:22.913921 1581510 kapi.go:107] duration metric: took 1m30.504359541s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1209 04:19:23.351533 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:23.852375 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:24.405354 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:24.850258 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:25.352184 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:25.862121 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:26.351021 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:26.851388 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:27.351241 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:27.851514 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:28.350432 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:28.849929 1581510 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 04:19:29.351428 1581510 kapi.go:107] duration metric: took 1m36.504776027s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1209 04:19:29.355861 1581510 out.go:179] * Enabled addons: inspektor-gadget, storage-provisioner, registry-creds, cloud-spanner, nvidia-device-plugin, amd-gpu-device-plugin, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1209 04:19:29.359424 1581510 addons.go:530] duration metric: took 1m43.515374933s for enable addons: enabled=[inspektor-gadget storage-provisioner registry-creds cloud-spanner nvidia-device-plugin amd-gpu-device-plugin ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1209 04:19:29.359488 1581510 start.go:247] waiting for cluster config update ...
	I1209 04:19:29.359518 1581510 start.go:256] writing updated cluster config ...
	I1209 04:19:29.360948 1581510 ssh_runner.go:195] Run: rm -f paused
	I1209 04:19:29.367561 1581510 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 04:19:29.371352 1581510 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rvbf9" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 04:19:29.376865 1581510 pod_ready.go:94] pod "coredns-66bc5c9577-rvbf9" is "Ready"
	I1209 04:19:29.376896 1581510 pod_ready.go:86] duration metric: took 5.514131ms for pod "coredns-66bc5c9577-rvbf9" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 04:19:29.379068 1581510 pod_ready.go:83] waiting for pod "etcd-addons-377526" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 04:19:29.383892 1581510 pod_ready.go:94] pod "etcd-addons-377526" is "Ready"
	I1209 04:19:29.383918 1581510 pod_ready.go:86] duration metric: took 4.821906ms for pod "etcd-addons-377526" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 04:19:29.386115 1581510 pod_ready.go:83] waiting for pod "kube-apiserver-addons-377526" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 04:19:29.391395 1581510 pod_ready.go:94] pod "kube-apiserver-addons-377526" is "Ready"
	I1209 04:19:29.391429 1581510 pod_ready.go:86] duration metric: took 5.281653ms for pod "kube-apiserver-addons-377526" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 04:19:29.393727 1581510 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-377526" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 04:19:29.772147 1581510 pod_ready.go:94] pod "kube-controller-manager-addons-377526" is "Ready"
	I1209 04:19:29.772183 1581510 pod_ready.go:86] duration metric: took 378.431716ms for pod "kube-controller-manager-addons-377526" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 04:19:29.971715 1581510 pod_ready.go:83] waiting for pod "kube-proxy-vrrb5" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 04:19:30.371795 1581510 pod_ready.go:94] pod "kube-proxy-vrrb5" is "Ready"
	I1209 04:19:30.371833 1581510 pod_ready.go:86] duration metric: took 400.092091ms for pod "kube-proxy-vrrb5" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 04:19:30.572457 1581510 pod_ready.go:83] waiting for pod "kube-scheduler-addons-377526" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 04:19:30.972046 1581510 pod_ready.go:94] pod "kube-scheduler-addons-377526" is "Ready"
	I1209 04:19:30.972075 1581510 pod_ready.go:86] duration metric: took 399.590859ms for pod "kube-scheduler-addons-377526" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 04:19:30.972087 1581510 pod_ready.go:40] duration metric: took 1.604494789s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 04:19:31.056572 1581510 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1209 04:19:31.059659 1581510 out.go:179] * Done! kubectl is now configured to use "addons-377526" cluster and "default" namespace by default
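
The hundreds of "waiting for pod" lines above are minikube's kapi.go polling each listed label selector until the addon pod reports Ready, and the pod_ready.go block afterwards is the same check over the core kube-system components. A rough hand-check equivalent, not minikube's actual mechanism (it polls through client-go inside kapi.go); timeouts here are illustrative and the namespaces are inferred from the container table below:

    kubectl -n kube-system wait pod -l kubernetes.io/minikube-addons=registry --for=condition=Ready --timeout=120s
    kubectl -n ingress-nginx wait pod -l app.kubernetes.io/name=ingress-nginx --for=condition=Ready --timeout=120s
    kubectl -n gcp-auth wait pod -l kubernetes.io/minikube-addons=gcp-auth --for=condition=Ready --timeout=120s
    kubectl -n kube-system wait pod -l kubernetes.io/minikube-addons=csi-hostpath-driver --for=condition=Ready --timeout=120s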
	
	
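The gcp-auth notes at 04:19:16 above mention opting a pod out of credential mounting via a `gcp-auth-skip-secret` label, or re-running `addons enable` with `--refresh`. A minimal sketch of both, assuming the label value "true" as described in minikube's gcp-auth documentation (the pod name and image are placeholders):

    # Hypothetical pod that will not get GCP credentials mounted:
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: skip-gcp-demo              # placeholder name
      labels:
        gcp-auth-skip-secret: "true"   # assumed value, per minikube docs
    spec:
      containers:
      - name: app
        image: busybox                 # placeholder image
        command: ["sleep", "3600"]
    EOF
    # Or remount credentials into existing pods after recreating them:
    out/minikube-linux-arm64 -p addons-377526 addons enable gcp-auth --refresh
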
	==> CRI-O <==
	Dec 09 04:19:32 addons-377526 crio[832]: time="2025-12-09T04:19:32.189981438Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:b9653476c6ce35a90e796abb608b9c085986dd556825f4b5dee5f2253f3aa8e7 UID:57268e01-0d57-4108-a966-2bf34593e140 NetNS:/var/run/netns/b8856bf5-4b2d-4a07-aa05-321602d431ee Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001ed0688}] Aliases:map[]}"
	Dec 09 04:19:32 addons-377526 crio[832]: time="2025-12-09T04:19:32.190342968Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 09 04:19:32 addons-377526 crio[832]: time="2025-12-09T04:19:32.194928129Z" level=info msg="Ran pod sandbox b9653476c6ce35a90e796abb608b9c085986dd556825f4b5dee5f2253f3aa8e7 with infra container: default/busybox/POD" id=683781f4-3b92-4ba6-99c4-0649b4806fda name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 09 04:19:32 addons-377526 crio[832]: time="2025-12-09T04:19:32.196135854Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a9b2d104-837c-4b01-a81f-288afd748e3f name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:19:32 addons-377526 crio[832]: time="2025-12-09T04:19:32.196367602Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=a9b2d104-837c-4b01-a81f-288afd748e3f name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:19:32 addons-377526 crio[832]: time="2025-12-09T04:19:32.196473859Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=a9b2d104-837c-4b01-a81f-288afd748e3f name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:19:32 addons-377526 crio[832]: time="2025-12-09T04:19:32.197766516Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=356063a7-4705-4f4f-a77d-f774f171a263 name=/runtime.v1.ImageService/PullImage
	Dec 09 04:19:32 addons-377526 crio[832]: time="2025-12-09T04:19:32.199616372Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 09 04:19:34 addons-377526 crio[832]: time="2025-12-09T04:19:34.01833127Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=356063a7-4705-4f4f-a77d-f774f171a263 name=/runtime.v1.ImageService/PullImage
	Dec 09 04:19:34 addons-377526 crio[832]: time="2025-12-09T04:19:34.019242351Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0079d3cb-52cc-4cea-bee2-fb11893dcede name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:19:34 addons-377526 crio[832]: time="2025-12-09T04:19:34.023171687Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5a1854c9-3bce-40ed-bf3a-2c6c77c3bf98 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:19:34 addons-377526 crio[832]: time="2025-12-09T04:19:34.031501263Z" level=info msg="Creating container: default/busybox/busybox" id=1bc6fd78-b102-467d-9c81-2354de7934c7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 04:19:34 addons-377526 crio[832]: time="2025-12-09T04:19:34.031643164Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 04:19:34 addons-377526 crio[832]: time="2025-12-09T04:19:34.039484948Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 04:19:34 addons-377526 crio[832]: time="2025-12-09T04:19:34.040037421Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 04:19:34 addons-377526 crio[832]: time="2025-12-09T04:19:34.057795948Z" level=info msg="Created container 9e1023fff88d1c7d41365172a2eba36109b1725f75c52bceb7fce8eb7daf58ff: default/busybox/busybox" id=1bc6fd78-b102-467d-9c81-2354de7934c7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 04:19:34 addons-377526 crio[832]: time="2025-12-09T04:19:34.058846945Z" level=info msg="Starting container: 9e1023fff88d1c7d41365172a2eba36109b1725f75c52bceb7fce8eb7daf58ff" id=e0512534-a525-403c-8fed-5aebe3da6231 name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 04:19:34 addons-377526 crio[832]: time="2025-12-09T04:19:34.06127453Z" level=info msg="Started container" PID=4966 containerID=9e1023fff88d1c7d41365172a2eba36109b1725f75c52bceb7fce8eb7daf58ff description=default/busybox/busybox id=e0512534-a525-403c-8fed-5aebe3da6231 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b9653476c6ce35a90e796abb608b9c085986dd556825f4b5dee5f2253f3aa8e7
	Dec 09 04:19:41 addons-377526 crio[832]: time="2025-12-09T04:19:41.107155418Z" level=info msg="Removing container: 46ec56406d657199d488bd6d2139f93ca58a128e19bcf1b7c63036fea80c323a" id=dff96592-1cd2-4df8-86df-008a550bded6 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 09 04:19:41 addons-377526 crio[832]: time="2025-12-09T04:19:41.110235219Z" level=info msg="Error loading conmon cgroup of container 46ec56406d657199d488bd6d2139f93ca58a128e19bcf1b7c63036fea80c323a: cgroup deleted" id=dff96592-1cd2-4df8-86df-008a550bded6 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 09 04:19:41 addons-377526 crio[832]: time="2025-12-09T04:19:41.116874177Z" level=info msg="Removed container 46ec56406d657199d488bd6d2139f93ca58a128e19bcf1b7c63036fea80c323a: gcp-auth/gcp-auth-certs-create-6pfk2/create" id=dff96592-1cd2-4df8-86df-008a550bded6 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 09 04:19:41 addons-377526 crio[832]: time="2025-12-09T04:19:41.120242112Z" level=info msg="Stopping pod sandbox: c808fc4e666bdafb03fffd56475c3192786da61931b776b9c1a31de3cfaf4323" id=121fc8d3-8f14-4d9e-9bd8-2387ac9eaa90 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 09 04:19:41 addons-377526 crio[832]: time="2025-12-09T04:19:41.120435132Z" level=info msg="Stopped pod sandbox (already stopped): c808fc4e666bdafb03fffd56475c3192786da61931b776b9c1a31de3cfaf4323" id=121fc8d3-8f14-4d9e-9bd8-2387ac9eaa90 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 09 04:19:41 addons-377526 crio[832]: time="2025-12-09T04:19:41.121020081Z" level=info msg="Removing pod sandbox: c808fc4e666bdafb03fffd56475c3192786da61931b776b9c1a31de3cfaf4323" id=c8ac8625-da1e-4d55-a622-ee2ca4fe7fe3 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 09 04:19:41 addons-377526 crio[832]: time="2025-12-09T04:19:41.128829562Z" level=info msg="Removed pod sandbox: c808fc4e666bdafb03fffd56475c3192786da61931b776b9c1a31de3cfaf4323" id=c8ac8625-da1e-4d55-a622-ee2ca4fe7fe3 name=/runtime.v1.RuntimeService/RemovePodSandbox
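
The CRI-O events above walk the busybox pod through RunPodSandbox (sandbox b9653476c6ce3), PullImage, CreateContainer, and StartContainer (container 9e1023fff88d1). They can be cross-checked from inside the node with crictl; a sketch, assuming the stock crictl CLI shipped in the minikube node image (the grep runs locally on the ssh output):

    out/minikube-linux-arm64 -p addons-377526 ssh -- sudo crictl pods --name busybox    # the sandbox
    out/minikube-linux-arm64 -p addons-377526 ssh -- sudo crictl ps -a --name busybox   # the container
    out/minikube-linux-arm64 -p addons-377526 ssh -- sudo crictl images | grep busybox  # the pulled image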
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	9e1023fff88d1       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          8 seconds ago        Running             busybox                                  0                   b9653476c6ce3       busybox                                     default
	c04442e39ef35       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          14 seconds ago       Running             csi-snapshotter                          0                   0aa207553e1a0       csi-hostpathplugin-865n6                    kube-system
	7aebdd3431a65       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          16 seconds ago       Running             csi-provisioner                          0                   0aa207553e1a0       csi-hostpathplugin-865n6                    kube-system
	069b46a278cfb       e8105550077f5c6c8e92536651451107053f0e41635396ee42aef596441c179a                                                                             17 seconds ago       Exited              patch                                    3                   6896fc22efa77       ingress-nginx-admission-patch-tj9l7         ingress-nginx
	18febaede59c2       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            17 seconds ago       Running             liveness-probe                           0                   0aa207553e1a0       csi-hostpathplugin-865n6                    kube-system
	8cf0b6bd32f5b       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           18 seconds ago       Running             hostpath                                 0                   0aa207553e1a0       csi-hostpathplugin-865n6                    kube-system
	878f8c9d656ac       registry.k8s.io/ingress-nginx/controller@sha256:75494e2145fbebf362d24e24e9285b7fbb7da8783ab272092e3126e24ee4776d                             20 seconds ago       Running             controller                               0                   ef16253316274       ingress-nginx-controller-85d4c799dd-m8q7w   ingress-nginx
	0cffce0029b0d       e8105550077f5c6c8e92536651451107053f0e41635396ee42aef596441c179a                                                                             20 seconds ago       Exited              patch                                    3                   43e2aac5e6ece       gcp-auth-certs-patch-px7vt                  gcp-auth
	106f96ef6ff47       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 26 seconds ago       Running             gcp-auth                                 0                   fc72b833d53fc       gcp-auth-78565c9fb4-r5mtf                   gcp-auth
	174130d7501d2       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                29 seconds ago       Running             node-driver-registrar                    0                   0aa207553e1a0       csi-hostpathplugin-865n6                    kube-system
	b6aeff0b9c015       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:fadc7bf59b69965b6707edb68022bed4f55a1f99b15f7acd272793e48f171496                            31 seconds ago       Running             gadget                                   0                   7ec9fc2954019       gadget-s4rfv                                gadget
	a69a96490b5ae       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              34 seconds ago       Running             csi-resizer                              0                   c262bb4bd18a7       csi-hostpath-resizer-0                      kube-system
	cc37fac1bc08a       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              36 seconds ago       Running             registry-proxy                           0                   cafc9421507a5       registry-proxy-nlsrb                        kube-system
	9489ae99adda3       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     39 seconds ago       Running             nvidia-device-plugin-ctr                 0                   4d300aff6fa93       nvidia-device-plugin-daemonset-qpgbq        kube-system
	197524f2c1763       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   43 seconds ago       Running             csi-external-health-monitor-controller   0                   0aa207553e1a0       csi-hostpathplugin-865n6                    kube-system
	bb6380f4073cd       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   45 seconds ago       Exited              create                                   0                   72773acb3780f       ingress-nginx-admission-create-stzpv        ingress-nginx
	18c00d5991fec       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              45 seconds ago       Running             yakd                                     0                   28b3b2771bf0d       yakd-dashboard-5ff678cb9-lzq6q              yakd-dashboard
	8d2bef8d89158       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      49 seconds ago       Running             volume-snapshot-controller               0                   607f4d61a1ea2       snapshot-controller-7d9fbc56b8-zphwq        kube-system
	e89fcd7e7a651       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             49 seconds ago       Running             csi-attacher                             0                   c4c12cc03fe0f       csi-hostpath-attacher-0                     kube-system
	9b3a8c868c3c9       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               51 seconds ago       Running             minikube-ingress-dns                     0                   18260dfb1d93f       kube-ingress-dns-minikube                   kube-system
	05aaaea2b06ae       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   3f6869f446610       local-path-provisioner-648f6765c9-zx7zg     local-path-storage
	d448cac096a04       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   de397074db7ec       snapshot-controller-7d9fbc56b8-tx8sz        kube-system
	a549d8652b346       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   92c76b38d4431       registry-6b586f9694-pd2mr                   kube-system
	9dce99afd0d08       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               About a minute ago   Running             cloud-spanner-emulator                   0                   5e6addc9258c8       cloud-spanner-emulator-5bdddb765-bg8ss      default
	365b8c540ac8b       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   9ac52df87dbe2       metrics-server-85b7d694d7-pckkq             kube-system
	ade186251b0b0       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   8cf0a1c4ce903       storage-provisioner                         kube-system
	895a853e4aab3       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   c5f1ddbb5e9e9       coredns-66bc5c9577-rvbf9                    kube-system
	3f583b93b3d82       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                                                             About a minute ago   Running             kube-proxy                               0                   07ed767d5928d       kube-proxy-vrrb5                            kube-system
	f23d383bb9010       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             About a minute ago   Running             kindnet-cni                              0                   1d2485236b7c6       kindnet-whbx4                               kube-system
	3e19f8eb0be86       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                                                             2 minutes ago        Running             kube-apiserver                           0                   d274abaf3b590       kube-apiserver-addons-377526                kube-system
	5f20869a412bb       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                                                             2 minutes ago        Running             kube-controller-manager                  0                   dc35a8dbc37d6       kube-controller-manager-addons-377526       kube-system
	23444ddd657bb       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                                                             2 minutes ago        Running             kube-scheduler                           0                   e7bdf487e5df3       kube-scheduler-addons-377526                kube-system
	3d9befd5158d0       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             2 minutes ago        Running             etcd                                     0                   5753de0a77b15       etcd-addons-377526                          kube-system
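	The inventory above is the node's CRI view of every container, including the Exited admission-job containers (note the patch jobs sitting at attempt 3). Under the crio runtime it should match what crictl reports on the node itself; a sketch, assuming the profile name from this run:
	    minikube -p addons-377526 ssh -- sudo crictl ps -a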
	
	
	==> coredns [895a853e4aab3bfd20dc33efe93732055e9143ac6017c4be43840f854767cfac] <==
	[INFO] 10.244.0.15:37659 - 58083 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000084834s
	[INFO] 10.244.0.15:37659 - 23158 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002328655s
	[INFO] 10.244.0.15:37659 - 24991 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002434535s
	[INFO] 10.244.0.15:37659 - 6094 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000122414s
	[INFO] 10.244.0.15:37659 - 54089 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000164932s
	[INFO] 10.244.0.15:52128 - 63119 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000168749s
	[INFO] 10.244.0.15:52128 - 62930 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000104338s
	[INFO] 10.244.0.15:49491 - 51566 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000097248s
	[INFO] 10.244.0.15:49491 - 51386 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000075587s
	[INFO] 10.244.0.15:43427 - 56565 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000092711s
	[INFO] 10.244.0.15:43427 - 56128 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000191805s
	[INFO] 10.244.0.15:53242 - 29814 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001249998s
	[INFO] 10.244.0.15:53242 - 29601 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001398677s
	[INFO] 10.244.0.15:34182 - 31941 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000142754s
	[INFO] 10.244.0.15:34182 - 31505 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000174615s
	[INFO] 10.244.0.20:59914 - 8191 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000154455s
	[INFO] 10.244.0.20:45159 - 24076 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000128584s
	[INFO] 10.244.0.20:46861 - 4752 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000101474s
	[INFO] 10.244.0.20:43657 - 30263 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000147144s
	[INFO] 10.244.0.20:46150 - 21464 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000106381s
	[INFO] 10.244.0.20:34507 - 33229 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000096863s
	[INFO] 10.244.0.20:47061 - 28902 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002729832s
	[INFO] 10.244.0.20:51136 - 64271 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003465068s
	[INFO] 10.244.0.20:44624 - 31626 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000849485s
	[INFO] 10.244.0.20:42413 - 47439 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.00135518s
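	The NXDOMAIN-then-NOERROR ladders above are ordinary resolver search-path expansion under the pod default of ndots:5: each name is tried against every search suffix before the bare name resolves. For the 10.244.0.15 client, the suffixes visible in the log imply a pod resolv.conf along these lines (reconstructed from the queries, not captured from the node; 10.96.0.10 is the conventional kube-dns ClusterIP and is an assumption here):
	    nameserver 10.96.0.10
	    search kube-system.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	    options ndots:5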
	
	
	==> describe nodes <==
	Name:               addons-377526
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-377526
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d
	                    minikube.k8s.io/name=addons-377526
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_09T04_17_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-377526
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-377526"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Dec 2025 04:17:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-377526
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Dec 2025 04:19:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Dec 2025 04:19:12 +0000   Tue, 09 Dec 2025 04:17:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Dec 2025 04:19:12 +0000   Tue, 09 Dec 2025 04:17:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Dec 2025 04:19:12 +0000   Tue, 09 Dec 2025 04:17:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Dec 2025 04:19:12 +0000   Tue, 09 Dec 2025 04:18:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-377526
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 23f1bd729e908485546e733d693697cd
	  System UUID:                da83b65b-98c5-4850-a34f-46ba26303299
	  Boot ID:                    3c42bf6f-64e9-4298-a947-b5a2e6063f1e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     cloud-spanner-emulator-5bdddb765-bg8ss       0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  gadget                      gadget-s4rfv                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  gcp-auth                    gcp-auth-78565c9fb4-r5mtf                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-m8q7w    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         110s
	  kube-system                 coredns-66bc5c9577-rvbf9                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     116s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 csi-hostpathplugin-865n6                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 etcd-addons-377526                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m1s
	  kube-system                 kindnet-whbx4                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      116s
	  kube-system                 kube-apiserver-addons-377526                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-controller-manager-addons-377526        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-vrrb5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-scheduler-addons-377526                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 metrics-server-85b7d694d7-pckkq              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         111s
	  kube-system                 nvidia-device-plugin-daemonset-qpgbq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 registry-6b586f9694-pd2mr                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 registry-creds-764b6fb674-hdrg9              0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 registry-proxy-nlsrb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 snapshot-controller-7d9fbc56b8-tx8sz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 snapshot-controller-7d9fbc56b8-zphwq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  local-path-storage          local-path-provisioner-648f6765c9-zx7zg      0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-lzq6q               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     111s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 114s                 kube-proxy       
	  Normal   Starting                 2m10s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m10s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m9s (x8 over 2m9s)  kubelet          Node addons-377526 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m9s (x8 over 2m9s)  kubelet          Node addons-377526 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m9s (x8 over 2m9s)  kubelet          Node addons-377526 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m2s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m2s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m1s                 kubelet          Node addons-377526 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m1s                 kubelet          Node addons-377526 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m1s                 kubelet          Node addons-377526 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           117s                 node-controller  Node addons-377526 event: Registered Node addons-377526 in Controller
	  Normal   NodeReady                74s                  kubelet          Node addons-377526 status is now: NodeReady
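	The Allocated resources block is just the column totals of the pod table, and the percentages check out against Allocatable:
	    cpu requests:    100m+100m+100m+100m+250m+200m+100m+100m = 1050m;  1050m / 2000m = 52.5% -> 52%
	    memory requests: 90Mi+70Mi+100Mi+50Mi+200Mi+128Mi = 638Mi;  638Mi / 8022300Ki ~ 8%
	    memory limits:   170Mi+50Mi+256Mi = 476Mi;  ~ 6%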
	
	
	==> dmesg <==
	[Dec 9 02:15] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 03:35] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 04:15] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 04:17] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [3d9befd5158d0fb9dcd408b398d0ade47c7417da742e387aa66109ca8ed7918e] <==
	{"level":"warn","ts":"2025-12-09T04:17:37.410946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:17:37.425336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:17:37.463716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:17:37.473094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:17:37.495097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:17:37.503233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:17:37.518671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:17:37.532924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:17:37.548150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:17:37.565314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:17:37.583531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:17:37.595640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:17:37.609358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:17:37.624201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:17:37.646089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:17:37.673294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:17:37.690866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:17:37.702712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:17:37.769249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:17:52.993204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:17:53.014095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:18:15.447842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:18:15.462965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:18:15.492111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T04:18:15.507859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46364","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [106f96ef6ff47529e324d2dc323a5c98b17fab4f289dd307202e27b7bde1f08d] <==
	2025/12/09 04:19:15 GCP Auth Webhook started!
	2025/12/09 04:19:31 Ready to marshal response ...
	2025/12/09 04:19:31 Ready to write response ...
	2025/12/09 04:19:31 Ready to marshal response ...
	2025/12/09 04:19:31 Ready to write response ...
	2025/12/09 04:19:31 Ready to marshal response ...
	2025/12/09 04:19:31 Ready to write response ...
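	The marshal/write pairs at 04:19:31 coincide with the admission of default/busybox, whose injected gcp-creds volume shows up in the kubelet section at the same second, so the mutating webhook was exercised as expected. To confirm the webhook registration without assuming its exact object name:
	    kubectl --context addons-377526 get mutatingwebhookconfigurations | grep gcp-auth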
	
	
	==> kernel <==
	 04:19:42 up  9:02,  0 user,  load average: 4.30, 2.16, 1.69
	Linux addons-377526 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f23d383bb901021ad468c9e01555bb740a0facf5322dcee6b0def8a8f5c26cef] <==
	I1209 04:17:47.625268       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1209 04:17:47.625484       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1209 04:18:17.625723       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1209 04:18:17.625726       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1209 04:18:17.625853       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1209 04:18:17.625998       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1209 04:18:19.225971       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1209 04:18:19.226004       1 metrics.go:72] Registering metrics
	I1209 04:18:19.226075       1 controller.go:711] "Syncing nftables rules"
	I1209 04:18:27.630658       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 04:18:27.630699       1 main.go:301] handling current node
	I1209 04:18:37.626724       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 04:18:37.626768       1 main.go:301] handling current node
	I1209 04:18:47.626652       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 04:18:47.626699       1 main.go:301] handling current node
	I1209 04:18:57.625479       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 04:18:57.625616       1 main.go:301] handling current node
	I1209 04:19:07.624757       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 04:19:07.624787       1 main.go:301] handling current node
	I1209 04:19:17.625394       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 04:19:17.625492       1 main.go:301] handling current node
	I1209 04:19:27.625572       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 04:19:27.625603       1 main.go:301] handling current node
	I1209 04:19:37.625051       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 04:19:37.625087       1 main.go:301] handling current node
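	10.96.0.1:443 is the in-cluster kubernetes Service VIP, so the i/o timeouts at 04:18:17 mean kindnet briefly could not reach the apiserver through the service network; the caches sync two seconds later and per-node handling proceeds normally for the rest of the log. The VIP can be confirmed with:
	    kubectl --context addons-377526 get svc kubernetes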
	
	
	==> kube-apiserver [3e19f8eb0be8689c1e6db170c4a1893db77016e40e2d7ee36ae46433d1ab5dc7] <==
	W1209 04:17:53.013952       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1209 04:17:54.754788       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.104.8.221"}
	W1209 04:18:15.447585       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1209 04:18:15.462652       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1209 04:18:15.492005       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1209 04:18:15.507378       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1209 04:18:28.058712       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.8.221:443: connect: connection refused
	E1209 04:18:28.058832       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.8.221:443: connect: connection refused" logger="UnhandledError"
	W1209 04:18:28.059406       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.8.221:443: connect: connection refused
	E1209 04:18:28.059482       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.8.221:443: connect: connection refused" logger="UnhandledError"
	W1209 04:18:28.162112       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.8.221:443: connect: connection refused
	E1209 04:18:28.162159       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.8.221:443: connect: connection refused" logger="UnhandledError"
	E1209 04:18:44.472799       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.212.123:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.212.123:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.212.123:443: connect: connection refused" logger="UnhandledError"
	W1209 04:18:44.472889       1 handler_proxy.go:99] no RequestInfo found in the context
	E1209 04:18:44.472941       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1209 04:18:44.473599       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.212.123:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.212.123:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.212.123:443: connect: connection refused" logger="UnhandledError"
	E1209 04:18:44.480165       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.212.123:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.212.123:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.212.123:443: connect: connection refused" logger="UnhandledError"
	E1209 04:18:44.499299       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.212.123:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.212.123:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.212.123:443: connect: connection refused" logger="UnhandledError"
	I1209 04:18:44.649952       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1209 04:19:40.172294       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34630: use of closed network connection
	E1209 04:19:40.417738       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34656: use of closed network connection
	E1209 04:19:40.567760       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34672: use of closed network connection
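	The canceled 127.0.0.1:2379 dials at 04:17:53 and 04:18:15 pair with the rejected-connection EOF entries in the etcd section above at the same instants, i.e. both sides are logging the same aborted probe connections. One way to line them up, as a sketch:
	    minikube -p addons-377526 logs | grep -E '04:18:15\.(4[4-9]|50)'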
	
	
	==> kube-controller-manager [5f20869a412bbccdd019d0d88792fb1e038ef017fb684b743afc406185107fab] <==
	I1209 04:17:45.470589       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1209 04:17:45.470830       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1209 04:17:45.470908       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1209 04:17:45.470553       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1209 04:17:45.471023       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1209 04:17:45.471086       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1209 04:17:45.471154       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-377526"
	I1209 04:17:45.471192       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1209 04:17:45.471239       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1209 04:17:45.470782       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1209 04:17:45.470610       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1209 04:17:45.473792       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1209 04:17:45.471250       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1209 04:17:45.475823       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1209 04:17:45.473800       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1209 04:17:45.481298       1 shared_informer.go:356] "Caches are synced" controller="job"
	E1209 04:17:51.549722       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1209 04:18:15.440125       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1209 04:18:15.440280       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1209 04:18:15.440343       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1209 04:18:15.480271       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1209 04:18:15.485099       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1209 04:18:15.540694       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1209 04:18:15.586310       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1209 04:18:30.481590       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
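	The stale-GroupVersion errors for metrics.k8s.io/v1beta1 are transient: they persist only until the metrics-server APIService goes Available (the kube-apiserver section shows the group being added to the ResourceManager at 04:18:44). Availability can be spot-checked with:
	    kubectl --context addons-377526 get apiservice v1beta1.metrics.k8s.io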
	
	
	==> kube-proxy [3f583b93b3d82da13bf4c0cc7590397283a9f565f160c0b4aad9b625564dde0f] <==
	I1209 04:17:47.578106       1 server_linux.go:53] "Using iptables proxy"
	I1209 04:17:47.681299       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1209 04:17:47.782240       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1209 04:17:47.782272       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1209 04:17:47.782359       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 04:17:47.830228       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1209 04:17:47.830284       1 server_linux.go:132] "Using iptables Proxier"
	I1209 04:17:47.844499       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 04:17:47.845834       1 server.go:527] "Version info" version="v1.34.2"
	I1209 04:17:47.845860       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 04:17:47.847423       1 config.go:200] "Starting service config controller"
	I1209 04:17:47.847435       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1209 04:17:47.847452       1 config.go:106] "Starting endpoint slice config controller"
	I1209 04:17:47.847456       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1209 04:17:47.847466       1 config.go:403] "Starting serviceCIDR config controller"
	I1209 04:17:47.847477       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1209 04:17:47.848159       1 config.go:309] "Starting node config controller"
	I1209 04:17:47.848167       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1209 04:17:47.848173       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1209 04:17:47.948128       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1209 04:17:47.948174       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1209 04:17:47.948214       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
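	The only complaint here is the unset nodePortAddresses, which the message itself suggests fixing with --nodeport-addresses primary. On a kubeadm-style cluster like this one, the setting lives in the kube-proxy ConfigMap; a sketch for inspecting it (field name per the kube-proxy configuration API):
	    kubectl --context addons-377526 -n kube-system get configmap kube-proxy -o yaml | grep -n nodePortAddresses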
	
	
	==> kube-scheduler [23444ddd657bbd00eed4c8df42d61dc49f01325e6c8f6ca46b95e4e0ebfec769] <==
	E1209 04:17:38.487307       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1209 04:17:38.487404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1209 04:17:38.487452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1209 04:17:38.487471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1209 04:17:38.487425       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1209 04:17:38.489680       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1209 04:17:38.489822       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1209 04:17:38.499826       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1209 04:17:38.499992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1209 04:17:38.500128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1209 04:17:38.500302       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1209 04:17:38.500418       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1209 04:17:39.406004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1209 04:17:39.530207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1209 04:17:39.587243       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1209 04:17:39.589592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1209 04:17:39.595991       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1209 04:17:39.630506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1209 04:17:39.630793       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1209 04:17:39.636364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1209 04:17:39.684539       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1209 04:17:39.714113       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1209 04:17:39.752757       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1209 04:17:39.761557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1209 04:17:42.859244       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
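	The forbidden list/watch errors all fall in the first seconds after boot, before the scheduler's RBAC bindings were visible to the authorizer; the caches-synced line at 04:17:42 shows they cleared on their own. Permissions can be spot-checked after startup with:
	    kubectl --context addons-377526 auth can-i list poddisruptionbudgets.policy --as=system:kube-scheduler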
	
	
	==> kubelet <==
	Dec 09 04:19:22 addons-377526 kubelet[1267]: I1209 04:19:22.808041    1267 scope.go:117] "RemoveContainer" containerID="5cc4abd67528e810680c809c022776aa902b50e293968e43559cdc64c1e1ad98"
	Dec 09 04:19:22 addons-377526 kubelet[1267]: I1209 04:19:22.868465    1267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-85d4c799dd-m8q7w" podStartSLOduration=69.358070928 podStartE2EDuration="1m30.868442139s" podCreationTimestamp="2025-12-09 04:17:52 +0000 UTC" firstStartedPulling="2025-12-09 04:19:00.669628153 +0000 UTC m=+79.754699771" lastFinishedPulling="2025-12-09 04:19:22.179999364 +0000 UTC m=+101.265070982" observedRunningTime="2025-12-09 04:19:22.805089888 +0000 UTC m=+101.890161506" watchObservedRunningTime="2025-12-09 04:19:22.868442139 +0000 UTC m=+101.953513765"
	Dec 09 04:19:24 addons-377526 kubelet[1267]: I1209 04:19:24.004469    1267 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2n5c9\" (UniqueName: \"kubernetes.io/projected/ee1eeca9-237e-4e79-a862-8d9adc94559f-kube-api-access-2n5c9\") pod \"ee1eeca9-237e-4e79-a862-8d9adc94559f\" (UID: \"ee1eeca9-237e-4e79-a862-8d9adc94559f\") "
	Dec 09 04:19:24 addons-377526 kubelet[1267]: I1209 04:19:24.015535    1267 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee1eeca9-237e-4e79-a862-8d9adc94559f-kube-api-access-2n5c9" (OuterVolumeSpecName: "kube-api-access-2n5c9") pod "ee1eeca9-237e-4e79-a862-8d9adc94559f" (UID: "ee1eeca9-237e-4e79-a862-8d9adc94559f"). InnerVolumeSpecName "kube-api-access-2n5c9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 09 04:19:24 addons-377526 kubelet[1267]: I1209 04:19:24.109150    1267 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2n5c9\" (UniqueName: \"kubernetes.io/projected/ee1eeca9-237e-4e79-a862-8d9adc94559f-kube-api-access-2n5c9\") on node \"addons-377526\" DevicePath \"\""
	Dec 09 04:19:24 addons-377526 kubelet[1267]: I1209 04:19:24.831741    1267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43e2aac5e6ece0f5ee89850d090a21ad53d440eccbbe594a08949b7640c8cad5"
	Dec 09 04:19:25 addons-377526 kubelet[1267]: I1209 04:19:25.025888    1267 scope.go:117] "RemoveContainer" containerID="b643259eeff0d0300e03ed9f7b8c3bf88583686786237c57bde1b39a9f0056d4"
	Dec 09 04:19:25 addons-377526 kubelet[1267]: I1209 04:19:25.244634    1267 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Dec 09 04:19:25 addons-377526 kubelet[1267]: I1209 04:19:25.244703    1267 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Dec 09 04:19:25 addons-377526 kubelet[1267]: I1209 04:19:25.843031    1267 scope.go:117] "RemoveContainer" containerID="b643259eeff0d0300e03ed9f7b8c3bf88583686786237c57bde1b39a9f0056d4"
	Dec 09 04:19:26 addons-377526 kubelet[1267]: I1209 04:19:26.930054    1267 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tggn\" (UniqueName: \"kubernetes.io/projected/a2b8ad2e-84f4-4902-a47c-e340ea597390-kube-api-access-6tggn\") pod \"a2b8ad2e-84f4-4902-a47c-e340ea597390\" (UID: \"a2b8ad2e-84f4-4902-a47c-e340ea597390\") "
	Dec 09 04:19:26 addons-377526 kubelet[1267]: I1209 04:19:26.936497    1267 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2b8ad2e-84f4-4902-a47c-e340ea597390-kube-api-access-6tggn" (OuterVolumeSpecName: "kube-api-access-6tggn") pod "a2b8ad2e-84f4-4902-a47c-e340ea597390" (UID: "a2b8ad2e-84f4-4902-a47c-e340ea597390"). InnerVolumeSpecName "kube-api-access-6tggn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 09 04:19:27 addons-377526 kubelet[1267]: I1209 04:19:27.030562    1267 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6tggn\" (UniqueName: \"kubernetes.io/projected/a2b8ad2e-84f4-4902-a47c-e340ea597390-kube-api-access-6tggn\") on node \"addons-377526\" DevicePath \"\""
	Dec 09 04:19:27 addons-377526 kubelet[1267]: I1209 04:19:27.860846    1267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6896fc22efa77bdd8910a7411e3bd853a9831e640d22a95e10638d84663b2e86"
	Dec 09 04:19:28 addons-377526 kubelet[1267]: I1209 04:19:28.896242    1267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-865n6" podStartSLOduration=1.887721688 podStartE2EDuration="1m0.896223645s" podCreationTimestamp="2025-12-09 04:18:28 +0000 UTC" firstStartedPulling="2025-12-09 04:18:29.082842796 +0000 UTC m=+48.167914414" lastFinishedPulling="2025-12-09 04:19:28.091344753 +0000 UTC m=+107.176416371" observedRunningTime="2025-12-09 04:19:28.895555224 +0000 UTC m=+107.980626858" watchObservedRunningTime="2025-12-09 04:19:28.896223645 +0000 UTC m=+107.981295271"
	Dec 09 04:19:31 addons-377526 kubelet[1267]: I1209 04:19:31.870457    1267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/57268e01-0d57-4108-a966-2bf34593e140-gcp-creds\") pod \"busybox\" (UID: \"57268e01-0d57-4108-a966-2bf34593e140\") " pod="default/busybox"
	Dec 09 04:19:31 addons-377526 kubelet[1267]: I1209 04:19:31.870512    1267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs67p\" (UniqueName: \"kubernetes.io/projected/57268e01-0d57-4108-a966-2bf34593e140-kube-api-access-qs67p\") pod \"busybox\" (UID: \"57268e01-0d57-4108-a966-2bf34593e140\") " pod="default/busybox"
	Dec 09 04:19:32 addons-377526 kubelet[1267]: E1209 04:19:32.073110    1267 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Dec 09 04:19:32 addons-377526 kubelet[1267]: E1209 04:19:32.073349    1267 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6de1311b-03a7-4949-9055-39d7b8dbf7fe-gcr-creds podName:6de1311b-03a7-4949-9055-39d7b8dbf7fe nodeName:}" failed. No retries permitted until 2025-12-09 04:20:36.073326997 +0000 UTC m=+175.158398623 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/6de1311b-03a7-4949-9055-39d7b8dbf7fe-gcr-creds") pod "registry-creds-764b6fb674-hdrg9" (UID: "6de1311b-03a7-4949-9055-39d7b8dbf7fe") : secret "registry-creds-gcr" not found
	Dec 09 04:19:32 addons-377526 kubelet[1267]: W1209 04:19:32.192991    1267 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/296d96ed056115803df5e9b6e1f695022ae85b36790b8d9d91c58e0053c079c9/crio-b9653476c6ce35a90e796abb608b9c085986dd556825f4b5dee5f2253f3aa8e7 WatchSource:0}: Error finding container b9653476c6ce35a90e796abb608b9c085986dd556825f4b5dee5f2253f3aa8e7: Status 404 returned error can't find the container with id b9653476c6ce35a90e796abb608b9c085986dd556825f4b5dee5f2253f3aa8e7
	Dec 09 04:19:33 addons-377526 kubelet[1267]: I1209 04:19:33.028664    1267 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96de7eec-146a-4507-b8cc-61cd4af2632a" path="/var/lib/kubelet/pods/96de7eec-146a-4507-b8cc-61cd4af2632a/volumes"
	Dec 09 04:19:34 addons-377526 kubelet[1267]: I1209 04:19:34.912025    1267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.087306275 podStartE2EDuration="3.912006208s" podCreationTimestamp="2025-12-09 04:19:31 +0000 UTC" firstStartedPulling="2025-12-09 04:19:32.196932112 +0000 UTC m=+111.282003730" lastFinishedPulling="2025-12-09 04:19:34.021632045 +0000 UTC m=+113.106703663" observedRunningTime="2025-12-09 04:19:34.911358841 +0000 UTC m=+113.996430459" watchObservedRunningTime="2025-12-09 04:19:34.912006208 +0000 UTC m=+113.997077817"
	Dec 09 04:19:41 addons-377526 kubelet[1267]: I1209 04:19:41.104410    1267 scope.go:117] "RemoveContainer" containerID="46ec56406d657199d488bd6d2139f93ca58a128e19bcf1b7c63036fea80c323a"
	Dec 09 04:19:41 addons-377526 kubelet[1267]: E1209 04:19:41.226658    1267 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/f9e2fa3263f8477ab5eba29240769027c6742311cf32e2b1fddb98581f64738b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/f9e2fa3263f8477ab5eba29240769027c6742311cf32e2b1fddb98581f64738b/diff: no such file or directory, extraDiskErr: <nil>
	Dec 09 04:19:41 addons-377526 kubelet[1267]: E1209 04:19:41.236853    1267 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/f802a625c6f2e15bcc484a51069a1513fa2bd34beaa0b71e3a1d5013b5ce74f1/diff" to get inode usage: stat /var/lib/containers/storage/overlay/f802a625c6f2e15bcc484a51069a1513fa2bd34beaa0b71e3a1d5013b5ce74f1/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/gcp-auth_gcp-auth-certs-patch-px7vt_ee1eeca9-237e-4e79-a862-8d9adc94559f/patch/1.log" to get inode usage: stat /var/log/pods/gcp-auth_gcp-auth-certs-patch-px7vt_ee1eeca9-237e-4e79-a862-8d9adc94559f/patch/1.log: no such file or directory
	
	
	==> storage-provisioner [ade186251b0b03d5e21b3b509f2bf86293ef5ea617865111f2dd375f2cfaa2af] <==
	W1209 04:19:17.550800       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:19:19.554411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:19:19.560160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:19:21.563392       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:19:21.572496       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:19:23.590986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:19:23.596364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:19:25.599719       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:19:25.604601       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:19:27.610733       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:19:27.628326       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:19:29.632556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:19:29.639899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:19:31.651629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:19:31.658250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:19:33.661152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:19:33.667153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:19:35.670419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:19:35.675102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:19:37.678567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:19:37.685677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:19:39.689244       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:19:39.696084       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:19:41.699697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1209 04:19:41.708604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
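The storage-provisioner log above repeats a single warning: reads of the v1 Endpoints API are deprecated as of Kubernetes v1.33, and discovery.k8s.io/v1 EndpointSlices should be used instead. As a minimal client-go sketch of the replacement the warning suggests, assuming in-cluster config and a placeholder service name (the provisioner's real leader-election object is not named here):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// In-cluster config, as the storage-provisioner itself would use.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// Instead of reading the deprecated v1 Endpoints object, list the
		// EndpointSlices backing the same Service. "my-service" is a
		// placeholder, not a name taken from this report.
		slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(
			context.Background(), metav1.ListOptions{
				LabelSelector: "kubernetes.io/service-name=my-service",
			})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
		}
	}

The kubernetes.io/service-name label is the standard link from a Service to its EndpointSlices, so listing by that label is the slice-level equivalent of reading the single Endpoints object.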
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-377526 -n addons-377526
helpers_test.go:269: (dbg) Run:  kubectl --context addons-377526 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: gcp-auth-certs-patch-px7vt ingress-nginx-admission-create-stzpv ingress-nginx-admission-patch-tj9l7 registry-creds-764b6fb674-hdrg9
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-377526 describe pod gcp-auth-certs-patch-px7vt ingress-nginx-admission-create-stzpv ingress-nginx-admission-patch-tj9l7 registry-creds-764b6fb674-hdrg9
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-377526 describe pod gcp-auth-certs-patch-px7vt ingress-nginx-admission-create-stzpv ingress-nginx-admission-patch-tj9l7 registry-creds-764b6fb674-hdrg9: exit status 1 (97.25767ms)

** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-patch-px7vt" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-stzpv" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-tj9l7" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-hdrg9" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-377526 describe pod gcp-auth-certs-patch-px7vt ingress-nginx-admission-create-stzpv ingress-nginx-admission-patch-tj9l7 registry-creds-764b6fb674-hdrg9: exit status 1
addons_test.go:1113: (dbg) Run:  out/minikube-linux-arm64 -p addons-377526 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1113: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377526 addons disable headlamp --alsologtostderr -v=1: exit status 11 (291.212701ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1209 04:19:44.037046 1588060 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:19:44.037896 1588060 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:19:44.037920 1588060 out.go:374] Setting ErrFile to fd 2...
	I1209 04:19:44.037926 1588060 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:19:44.038242 1588060 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 04:19:44.038664 1588060 mustload.go:66] Loading cluster: addons-377526
	I1209 04:19:44.039067 1588060 config.go:182] Loaded profile config "addons-377526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:19:44.039088 1588060 addons.go:622] checking whether the cluster is paused
	I1209 04:19:44.039200 1588060 config.go:182] Loaded profile config "addons-377526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:19:44.039219 1588060 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:19:44.039762 1588060 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:19:44.058562 1588060 ssh_runner.go:195] Run: systemctl --version
	I1209 04:19:44.058656 1588060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:19:44.077065 1588060 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:19:44.185654 1588060 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 04:19:44.185745 1588060 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:19:44.220943 1588060 cri.go:89] found id: "c04442e39ef35fdc720b3c2bb3a77da977256d816f2eec2ebcfa6b979f8d0968"
	I1209 04:19:44.220971 1588060 cri.go:89] found id: "7aebdd3431a655622c91099e2e13d404de79d2d92cd3744233ad482bd5950b4a"
	I1209 04:19:44.220977 1588060 cri.go:89] found id: "18febaede59c2967af53b607d5a0971f75da0dffdc720977888c74bc4b43f989"
	I1209 04:19:44.220981 1588060 cri.go:89] found id: "8cf0b6bd32f5bb3b5d0c99a5cb73fc3b6625311dbba876d4d3e383bbd52b8844"
	I1209 04:19:44.220985 1588060 cri.go:89] found id: "174130d7501d2a4338753b358cf8658f2791da0197e2ddee56f4682364d0e5ce"
	I1209 04:19:44.220989 1588060 cri.go:89] found id: "a69a96490b5aefb4b7039ba55efc49cccbd001d0e16126c16649afdae1e0e5be"
	I1209 04:19:44.220992 1588060 cri.go:89] found id: "cc37fac1bc08a55afea23e467cf7ab65d053708170c6c35c316845ac5ad895e5"
	I1209 04:19:44.220996 1588060 cri.go:89] found id: "9489ae99adda39fae4cb5dfa918abcbcec4c6b2882922f49b01c09790b02500b"
	I1209 04:19:44.220999 1588060 cri.go:89] found id: "197524f2c1763b0f2e842c6b573a4d1bfb3cf7dfa8bea6daacdeff861043d351"
	I1209 04:19:44.221007 1588060 cri.go:89] found id: "8d2bef8d891580f057b9dca614e75513beeac88caf7536355ac38b71a4929ee5"
	I1209 04:19:44.221017 1588060 cri.go:89] found id: "e89fcd7e7a65121ec84cd2c9d89bbf436ccc5090968a417d230a03fafb1d57cb"
	I1209 04:19:44.221021 1588060 cri.go:89] found id: "9b3a8c868c3c905e36617afaf33522db2b0959f5baf822b5b3bad893fa0da43a"
	I1209 04:19:44.221024 1588060 cri.go:89] found id: "d448cac096a040574fbee288ffbf1b79d931e05be65b8699003d18c35b213d99"
	I1209 04:19:44.221028 1588060 cri.go:89] found id: "a549d8652b346e26791e868967bc4ba6691a6f3e6d6890628c34d5aaabaee422"
	I1209 04:19:44.221030 1588060 cri.go:89] found id: "365b8c540ac8b4ba2ffbea68247ecdcb4b22e31ec4b497e44af8153b9232cba0"
	I1209 04:19:44.221039 1588060 cri.go:89] found id: "ade186251b0b03d5e21b3b509f2bf86293ef5ea617865111f2dd375f2cfaa2af"
	I1209 04:19:44.221047 1588060 cri.go:89] found id: "895a853e4aab3bfd20dc33efe93732055e9143ac6017c4be43840f854767cfac"
	I1209 04:19:44.221052 1588060 cri.go:89] found id: "3f583b93b3d82da13bf4c0cc7590397283a9f565f160c0b4aad9b625564dde0f"
	I1209 04:19:44.221055 1588060 cri.go:89] found id: "f23d383bb901021ad468c9e01555bb740a0facf5322dcee6b0def8a8f5c26cef"
	I1209 04:19:44.221059 1588060 cri.go:89] found id: "3e19f8eb0be8689c1e6db170c4a1893db77016e40e2d7ee36ae46433d1ab5dc7"
	I1209 04:19:44.221064 1588060 cri.go:89] found id: "5f20869a412bbccdd019d0d88792fb1e038ef017fb684b743afc406185107fab"
	I1209 04:19:44.221067 1588060 cri.go:89] found id: "23444ddd657bbd00eed4c8df42d61dc49f01325e6c8f6ca46b95e4e0ebfec769"
	I1209 04:19:44.221070 1588060 cri.go:89] found id: "3d9befd5158d0fb9dcd408b398d0ade47c7417da742e387aa66109ca8ed7918e"
	I1209 04:19:44.221073 1588060 cri.go:89] found id: ""
	I1209 04:19:44.221133 1588060 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 04:19:44.236511 1588060 out.go:203] 
	W1209 04:19:44.239472 1588060 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T04:19:44Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T04:19:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1209 04:19:44.239494 1588060 out.go:285] * 
	* 
	W1209 04:19:44.249982 1588060 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 04:19:44.253050 1588060 out.go:203] 

** /stderr **
addons_test.go:1115: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-377526 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.38s)
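All five MK_ADDON_DISABLE_PAUSED failures in this report (Headlamp above; CloudSpanner, LocalPath, NvidiaDevicePlugin and Yakd below) stop at the same step: after listing the kube-system containers with crictl, the addons-disable path checks that the cluster is not paused by running "sudo runc list -f json", and that probe itself exits non-zero because /run/runc, runc's default state root, does not exist on this crio node. A minimal Go sketch of the probe as reconstructed from the two commands visible in the trace above; the function name and error wrapping are illustrative, not minikube's actual code:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// checkNotPaused mirrors the two commands in the trace: crictl lists the
	// kube-system containers, then runc is asked for their state.
	func checkNotPaused() error {
		ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return fmt.Errorf("crictl ps: %w", err)
		}
		// This is the step that fails in the report: runc's default state
		// root /run/runc is missing, so the probe errors out before any
		// container state is examined.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			return fmt.Errorf("runc list (after %d bytes of ids): %w", len(ids), err)
		}
		_ = out // the real check would parse this JSON for paused containers
		return nil
	}

	func main() {
		if err := checkNotPaused(); err != nil {
			fmt.Println("MK_ADDON_DISABLE_PAUSED-style failure:", err)
		}
	}

Since runc selects its state directory with the global --root flag, one plausible reading of the error is that the containers are tracked under a different state root, or by a different OCI runtime such as crun, so the probe never consults the runtime crio is actually using. That reading is an inference from the log, not something the report confirms.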

TestAddons/parallel/CloudSpanner (6.29s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:900: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-bg8ss" [b6af9a1f-fde0-4b90-a047-75afe267610d] Running
addons_test.go:900: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003320372s
addons_test.go:1113: (dbg) Run:  out/minikube-linux-arm64 -p addons-377526 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1113: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377526 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (276.098417ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1209 04:20:02.882285 1588551 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:20:02.883090 1588551 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:20:02.883106 1588551 out.go:374] Setting ErrFile to fd 2...
	I1209 04:20:02.883113 1588551 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:20:02.883378 1588551 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 04:20:02.883710 1588551 mustload.go:66] Loading cluster: addons-377526
	I1209 04:20:02.884146 1588551 config.go:182] Loaded profile config "addons-377526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:20:02.884165 1588551 addons.go:622] checking whether the cluster is paused
	I1209 04:20:02.884274 1588551 config.go:182] Loaded profile config "addons-377526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:20:02.884289 1588551 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:20:02.884829 1588551 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:20:02.905110 1588551 ssh_runner.go:195] Run: systemctl --version
	I1209 04:20:02.905170 1588551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:20:02.924908 1588551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:20:03.034814 1588551 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 04:20:03.034910 1588551 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:20:03.068026 1588551 cri.go:89] found id: "c04442e39ef35fdc720b3c2bb3a77da977256d816f2eec2ebcfa6b979f8d0968"
	I1209 04:20:03.068049 1588551 cri.go:89] found id: "7aebdd3431a655622c91099e2e13d404de79d2d92cd3744233ad482bd5950b4a"
	I1209 04:20:03.068056 1588551 cri.go:89] found id: "18febaede59c2967af53b607d5a0971f75da0dffdc720977888c74bc4b43f989"
	I1209 04:20:03.068060 1588551 cri.go:89] found id: "8cf0b6bd32f5bb3b5d0c99a5cb73fc3b6625311dbba876d4d3e383bbd52b8844"
	I1209 04:20:03.068063 1588551 cri.go:89] found id: "174130d7501d2a4338753b358cf8658f2791da0197e2ddee56f4682364d0e5ce"
	I1209 04:20:03.068067 1588551 cri.go:89] found id: "a69a96490b5aefb4b7039ba55efc49cccbd001d0e16126c16649afdae1e0e5be"
	I1209 04:20:03.068072 1588551 cri.go:89] found id: "cc37fac1bc08a55afea23e467cf7ab65d053708170c6c35c316845ac5ad895e5"
	I1209 04:20:03.068076 1588551 cri.go:89] found id: "9489ae99adda39fae4cb5dfa918abcbcec4c6b2882922f49b01c09790b02500b"
	I1209 04:20:03.068079 1588551 cri.go:89] found id: "197524f2c1763b0f2e842c6b573a4d1bfb3cf7dfa8bea6daacdeff861043d351"
	I1209 04:20:03.068086 1588551 cri.go:89] found id: "8d2bef8d891580f057b9dca614e75513beeac88caf7536355ac38b71a4929ee5"
	I1209 04:20:03.068089 1588551 cri.go:89] found id: "e89fcd7e7a65121ec84cd2c9d89bbf436ccc5090968a417d230a03fafb1d57cb"
	I1209 04:20:03.068093 1588551 cri.go:89] found id: "9b3a8c868c3c905e36617afaf33522db2b0959f5baf822b5b3bad893fa0da43a"
	I1209 04:20:03.068096 1588551 cri.go:89] found id: "d448cac096a040574fbee288ffbf1b79d931e05be65b8699003d18c35b213d99"
	I1209 04:20:03.068101 1588551 cri.go:89] found id: "a549d8652b346e26791e868967bc4ba6691a6f3e6d6890628c34d5aaabaee422"
	I1209 04:20:03.068104 1588551 cri.go:89] found id: "365b8c540ac8b4ba2ffbea68247ecdcb4b22e31ec4b497e44af8153b9232cba0"
	I1209 04:20:03.068109 1588551 cri.go:89] found id: "ade186251b0b03d5e21b3b509f2bf86293ef5ea617865111f2dd375f2cfaa2af"
	I1209 04:20:03.068116 1588551 cri.go:89] found id: "895a853e4aab3bfd20dc33efe93732055e9143ac6017c4be43840f854767cfac"
	I1209 04:20:03.068119 1588551 cri.go:89] found id: "3f583b93b3d82da13bf4c0cc7590397283a9f565f160c0b4aad9b625564dde0f"
	I1209 04:20:03.068123 1588551 cri.go:89] found id: "f23d383bb901021ad468c9e01555bb740a0facf5322dcee6b0def8a8f5c26cef"
	I1209 04:20:03.068126 1588551 cri.go:89] found id: "3e19f8eb0be8689c1e6db170c4a1893db77016e40e2d7ee36ae46433d1ab5dc7"
	I1209 04:20:03.068130 1588551 cri.go:89] found id: "5f20869a412bbccdd019d0d88792fb1e038ef017fb684b743afc406185107fab"
	I1209 04:20:03.068134 1588551 cri.go:89] found id: "23444ddd657bbd00eed4c8df42d61dc49f01325e6c8f6ca46b95e4e0ebfec769"
	I1209 04:20:03.068137 1588551 cri.go:89] found id: "3d9befd5158d0fb9dcd408b398d0ade47c7417da742e387aa66109ca8ed7918e"
	I1209 04:20:03.068140 1588551 cri.go:89] found id: ""
	I1209 04:20:03.068191 1588551 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 04:20:03.084759 1588551 out.go:203] 
	W1209 04:20:03.088164 1588551 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T04:20:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T04:20:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1209 04:20:03.088271 1588551 out.go:285] * 
	* 
	W1209 04:20:03.096820 1588551 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 04:20:03.100071 1588551 out.go:203] 

** /stderr **
addons_test.go:1115: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-377526 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (6.29s)

TestAddons/parallel/LocalPath (8.63s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:1009: (dbg) Run:  kubectl --context addons-377526 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:1015: (dbg) Run:  kubectl --context addons-377526 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:1019: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377526 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377526 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377526 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377526 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377526 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:1022: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [f9dc92c1-44ed-4aff-8bbe-40ce704baa7a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [f9dc92c1-44ed-4aff-8bbe-40ce704baa7a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [f9dc92c1-44ed-4aff-8bbe-40ce704baa7a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:1022: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003979687s
addons_test.go:1027: (dbg) Run:  kubectl --context addons-377526 get pvc test-pvc -o=json
addons_test.go:1036: (dbg) Run:  out/minikube-linux-arm64 -p addons-377526 ssh "cat /opt/local-path-provisioner/pvc-e33aa920-6724-4d1b-b3c6-a639b5fc9291_default_test-pvc/file1"
addons_test.go:1048: (dbg) Run:  kubectl --context addons-377526 delete pod test-local-path
addons_test.go:1052: (dbg) Run:  kubectl --context addons-377526 delete pvc test-pvc
addons_test.go:1113: (dbg) Run:  out/minikube-linux-arm64 -p addons-377526 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1113: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377526 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (318.821934ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1209 04:20:03.945482 1588688 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:20:03.946423 1588688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:20:03.946464 1588688 out.go:374] Setting ErrFile to fd 2...
	I1209 04:20:03.946486 1588688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:20:03.946821 1588688 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 04:20:03.947184 1588688 mustload.go:66] Loading cluster: addons-377526
	I1209 04:20:03.947617 1588688 config.go:182] Loaded profile config "addons-377526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:20:03.947656 1588688 addons.go:622] checking whether the cluster is paused
	I1209 04:20:03.947783 1588688 config.go:182] Loaded profile config "addons-377526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:20:03.947818 1588688 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:20:03.948369 1588688 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:20:03.965879 1588688 ssh_runner.go:195] Run: systemctl --version
	I1209 04:20:03.965930 1588688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:20:03.998056 1588688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:20:04.113701 1588688 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 04:20:04.113811 1588688 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:20:04.147413 1588688 cri.go:89] found id: "c04442e39ef35fdc720b3c2bb3a77da977256d816f2eec2ebcfa6b979f8d0968"
	I1209 04:20:04.147434 1588688 cri.go:89] found id: "7aebdd3431a655622c91099e2e13d404de79d2d92cd3744233ad482bd5950b4a"
	I1209 04:20:04.147440 1588688 cri.go:89] found id: "18febaede59c2967af53b607d5a0971f75da0dffdc720977888c74bc4b43f989"
	I1209 04:20:04.147444 1588688 cri.go:89] found id: "8cf0b6bd32f5bb3b5d0c99a5cb73fc3b6625311dbba876d4d3e383bbd52b8844"
	I1209 04:20:04.147447 1588688 cri.go:89] found id: "174130d7501d2a4338753b358cf8658f2791da0197e2ddee56f4682364d0e5ce"
	I1209 04:20:04.147451 1588688 cri.go:89] found id: "a69a96490b5aefb4b7039ba55efc49cccbd001d0e16126c16649afdae1e0e5be"
	I1209 04:20:04.147454 1588688 cri.go:89] found id: "cc37fac1bc08a55afea23e467cf7ab65d053708170c6c35c316845ac5ad895e5"
	I1209 04:20:04.147457 1588688 cri.go:89] found id: "9489ae99adda39fae4cb5dfa918abcbcec4c6b2882922f49b01c09790b02500b"
	I1209 04:20:04.147461 1588688 cri.go:89] found id: "197524f2c1763b0f2e842c6b573a4d1bfb3cf7dfa8bea6daacdeff861043d351"
	I1209 04:20:04.147468 1588688 cri.go:89] found id: "8d2bef8d891580f057b9dca614e75513beeac88caf7536355ac38b71a4929ee5"
	I1209 04:20:04.147471 1588688 cri.go:89] found id: "e89fcd7e7a65121ec84cd2c9d89bbf436ccc5090968a417d230a03fafb1d57cb"
	I1209 04:20:04.147475 1588688 cri.go:89] found id: "9b3a8c868c3c905e36617afaf33522db2b0959f5baf822b5b3bad893fa0da43a"
	I1209 04:20:04.147477 1588688 cri.go:89] found id: "d448cac096a040574fbee288ffbf1b79d931e05be65b8699003d18c35b213d99"
	I1209 04:20:04.147481 1588688 cri.go:89] found id: "a549d8652b346e26791e868967bc4ba6691a6f3e6d6890628c34d5aaabaee422"
	I1209 04:20:04.147484 1588688 cri.go:89] found id: "365b8c540ac8b4ba2ffbea68247ecdcb4b22e31ec4b497e44af8153b9232cba0"
	I1209 04:20:04.147492 1588688 cri.go:89] found id: "ade186251b0b03d5e21b3b509f2bf86293ef5ea617865111f2dd375f2cfaa2af"
	I1209 04:20:04.147499 1588688 cri.go:89] found id: "895a853e4aab3bfd20dc33efe93732055e9143ac6017c4be43840f854767cfac"
	I1209 04:20:04.147507 1588688 cri.go:89] found id: "3f583b93b3d82da13bf4c0cc7590397283a9f565f160c0b4aad9b625564dde0f"
	I1209 04:20:04.147510 1588688 cri.go:89] found id: "f23d383bb901021ad468c9e01555bb740a0facf5322dcee6b0def8a8f5c26cef"
	I1209 04:20:04.147513 1588688 cri.go:89] found id: "3e19f8eb0be8689c1e6db170c4a1893db77016e40e2d7ee36ae46433d1ab5dc7"
	I1209 04:20:04.147517 1588688 cri.go:89] found id: "5f20869a412bbccdd019d0d88792fb1e038ef017fb684b743afc406185107fab"
	I1209 04:20:04.147526 1588688 cri.go:89] found id: "23444ddd657bbd00eed4c8df42d61dc49f01325e6c8f6ca46b95e4e0ebfec769"
	I1209 04:20:04.147529 1588688 cri.go:89] found id: "3d9befd5158d0fb9dcd408b398d0ade47c7417da742e387aa66109ca8ed7918e"
	I1209 04:20:04.147532 1588688 cri.go:89] found id: ""
	I1209 04:20:04.147583 1588688 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 04:20:04.183280 1588688 out.go:203] 
	W1209 04:20:04.186344 1588688 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T04:20:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T04:20:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1209 04:20:04.186378 1588688 out.go:285] * 
	* 
	W1209 04:20:04.195640 1588688 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 04:20:04.202616 1588688 out.go:203] 

** /stderr **
addons_test.go:1115: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-377526 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.63s)

TestAddons/parallel/NvidiaDevicePlugin (6.27s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1085: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-qpgbq" [bbfd593a-3793-4122-af52-8a0e32e51d36] Running
addons_test.go:1085: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003130664s
addons_test.go:1113: (dbg) Run:  out/minikube-linux-arm64 -p addons-377526 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1113: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377526 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (268.798505ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1209 04:19:56.603020 1588394 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:19:56.603678 1588394 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:19:56.603693 1588394 out.go:374] Setting ErrFile to fd 2...
	I1209 04:19:56.603699 1588394 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:19:56.603958 1588394 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 04:19:56.604241 1588394 mustload.go:66] Loading cluster: addons-377526
	I1209 04:19:56.604612 1588394 config.go:182] Loaded profile config "addons-377526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:19:56.604637 1588394 addons.go:622] checking whether the cluster is paused
	I1209 04:19:56.604748 1588394 config.go:182] Loaded profile config "addons-377526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:19:56.604806 1588394 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:19:56.605327 1588394 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:19:56.630027 1588394 ssh_runner.go:195] Run: systemctl --version
	I1209 04:19:56.630097 1588394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:19:56.646628 1588394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:19:56.753271 1588394 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 04:19:56.753356 1588394 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:19:56.783214 1588394 cri.go:89] found id: "c04442e39ef35fdc720b3c2bb3a77da977256d816f2eec2ebcfa6b979f8d0968"
	I1209 04:19:56.783237 1588394 cri.go:89] found id: "7aebdd3431a655622c91099e2e13d404de79d2d92cd3744233ad482bd5950b4a"
	I1209 04:19:56.783243 1588394 cri.go:89] found id: "18febaede59c2967af53b607d5a0971f75da0dffdc720977888c74bc4b43f989"
	I1209 04:19:56.783247 1588394 cri.go:89] found id: "8cf0b6bd32f5bb3b5d0c99a5cb73fc3b6625311dbba876d4d3e383bbd52b8844"
	I1209 04:19:56.783251 1588394 cri.go:89] found id: "174130d7501d2a4338753b358cf8658f2791da0197e2ddee56f4682364d0e5ce"
	I1209 04:19:56.783255 1588394 cri.go:89] found id: "a69a96490b5aefb4b7039ba55efc49cccbd001d0e16126c16649afdae1e0e5be"
	I1209 04:19:56.783258 1588394 cri.go:89] found id: "cc37fac1bc08a55afea23e467cf7ab65d053708170c6c35c316845ac5ad895e5"
	I1209 04:19:56.783261 1588394 cri.go:89] found id: "9489ae99adda39fae4cb5dfa918abcbcec4c6b2882922f49b01c09790b02500b"
	I1209 04:19:56.783264 1588394 cri.go:89] found id: "197524f2c1763b0f2e842c6b573a4d1bfb3cf7dfa8bea6daacdeff861043d351"
	I1209 04:19:56.783289 1588394 cri.go:89] found id: "8d2bef8d891580f057b9dca614e75513beeac88caf7536355ac38b71a4929ee5"
	I1209 04:19:56.783302 1588394 cri.go:89] found id: "e89fcd7e7a65121ec84cd2c9d89bbf436ccc5090968a417d230a03fafb1d57cb"
	I1209 04:19:56.783305 1588394 cri.go:89] found id: "9b3a8c868c3c905e36617afaf33522db2b0959f5baf822b5b3bad893fa0da43a"
	I1209 04:19:56.783308 1588394 cri.go:89] found id: "d448cac096a040574fbee288ffbf1b79d931e05be65b8699003d18c35b213d99"
	I1209 04:19:56.783311 1588394 cri.go:89] found id: "a549d8652b346e26791e868967bc4ba6691a6f3e6d6890628c34d5aaabaee422"
	I1209 04:19:56.783315 1588394 cri.go:89] found id: "365b8c540ac8b4ba2ffbea68247ecdcb4b22e31ec4b497e44af8153b9232cba0"
	I1209 04:19:56.783319 1588394 cri.go:89] found id: "ade186251b0b03d5e21b3b509f2bf86293ef5ea617865111f2dd375f2cfaa2af"
	I1209 04:19:56.783327 1588394 cri.go:89] found id: "895a853e4aab3bfd20dc33efe93732055e9143ac6017c4be43840f854767cfac"
	I1209 04:19:56.783332 1588394 cri.go:89] found id: "3f583b93b3d82da13bf4c0cc7590397283a9f565f160c0b4aad9b625564dde0f"
	I1209 04:19:56.783336 1588394 cri.go:89] found id: "f23d383bb901021ad468c9e01555bb740a0facf5322dcee6b0def8a8f5c26cef"
	I1209 04:19:56.783338 1588394 cri.go:89] found id: "3e19f8eb0be8689c1e6db170c4a1893db77016e40e2d7ee36ae46433d1ab5dc7"
	I1209 04:19:56.783344 1588394 cri.go:89] found id: "5f20869a412bbccdd019d0d88792fb1e038ef017fb684b743afc406185107fab"
	I1209 04:19:56.783363 1588394 cri.go:89] found id: "23444ddd657bbd00eed4c8df42d61dc49f01325e6c8f6ca46b95e4e0ebfec769"
	I1209 04:19:56.783379 1588394 cri.go:89] found id: "3d9befd5158d0fb9dcd408b398d0ade47c7417da742e387aa66109ca8ed7918e"
	I1209 04:19:56.783388 1588394 cri.go:89] found id: ""
	I1209 04:19:56.783438 1588394 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 04:19:56.800142 1588394 out.go:203] 
	W1209 04:19:56.803338 1588394 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T04:19:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T04:19:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1209 04:19:56.803367 1588394 out.go:285] * 
	* 
	W1209 04:19:56.812011 1588394 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 04:19:56.815728 1588394 out.go:203] 

** /stderr **
addons_test.go:1115: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-377526 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.27s)

TestAddons/parallel/Yakd (6.29s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1107: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-lzq6q" [3db4f093-78c4-4111-ab20-20118e515b2b] Running
addons_test.go:1107: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00358522s
addons_test.go:1113: (dbg) Run:  out/minikube-linux-arm64 -p addons-377526 addons disable yakd --alsologtostderr -v=1
addons_test.go:1113: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377526 addons disable yakd --alsologtostderr -v=1: exit status 11 (282.36988ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1209 04:19:50.321699 1588125 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:19:50.322410 1588125 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:19:50.322423 1588125 out.go:374] Setting ErrFile to fd 2...
	I1209 04:19:50.322434 1588125 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:19:50.322737 1588125 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 04:19:50.323035 1588125 mustload.go:66] Loading cluster: addons-377526
	I1209 04:19:50.323420 1588125 config.go:182] Loaded profile config "addons-377526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:19:50.323437 1588125 addons.go:622] checking whether the cluster is paused
	I1209 04:19:50.323605 1588125 config.go:182] Loaded profile config "addons-377526": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:19:50.323624 1588125 host.go:66] Checking if "addons-377526" exists ...
	I1209 04:19:50.324221 1588125 cli_runner.go:164] Run: docker container inspect addons-377526 --format={{.State.Status}}
	I1209 04:19:50.341572 1588125 ssh_runner.go:195] Run: systemctl --version
	I1209 04:19:50.341629 1588125 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377526
	I1209 04:19:50.359265 1588125 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34240 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/addons-377526/id_rsa Username:docker}
	I1209 04:19:50.466066 1588125 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 04:19:50.466152 1588125 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:19:50.510079 1588125 cri.go:89] found id: "c04442e39ef35fdc720b3c2bb3a77da977256d816f2eec2ebcfa6b979f8d0968"
	I1209 04:19:50.510097 1588125 cri.go:89] found id: "7aebdd3431a655622c91099e2e13d404de79d2d92cd3744233ad482bd5950b4a"
	I1209 04:19:50.510106 1588125 cri.go:89] found id: "18febaede59c2967af53b607d5a0971f75da0dffdc720977888c74bc4b43f989"
	I1209 04:19:50.510111 1588125 cri.go:89] found id: "8cf0b6bd32f5bb3b5d0c99a5cb73fc3b6625311dbba876d4d3e383bbd52b8844"
	I1209 04:19:50.510114 1588125 cri.go:89] found id: "174130d7501d2a4338753b358cf8658f2791da0197e2ddee56f4682364d0e5ce"
	I1209 04:19:50.510118 1588125 cri.go:89] found id: "a69a96490b5aefb4b7039ba55efc49cccbd001d0e16126c16649afdae1e0e5be"
	I1209 04:19:50.510121 1588125 cri.go:89] found id: "cc37fac1bc08a55afea23e467cf7ab65d053708170c6c35c316845ac5ad895e5"
	I1209 04:19:50.510124 1588125 cri.go:89] found id: "9489ae99adda39fae4cb5dfa918abcbcec4c6b2882922f49b01c09790b02500b"
	I1209 04:19:50.510127 1588125 cri.go:89] found id: "197524f2c1763b0f2e842c6b573a4d1bfb3cf7dfa8bea6daacdeff861043d351"
	I1209 04:19:50.510134 1588125 cri.go:89] found id: "8d2bef8d891580f057b9dca614e75513beeac88caf7536355ac38b71a4929ee5"
	I1209 04:19:50.510137 1588125 cri.go:89] found id: "e89fcd7e7a65121ec84cd2c9d89bbf436ccc5090968a417d230a03fafb1d57cb"
	I1209 04:19:50.510140 1588125 cri.go:89] found id: "9b3a8c868c3c905e36617afaf33522db2b0959f5baf822b5b3bad893fa0da43a"
	I1209 04:19:50.510143 1588125 cri.go:89] found id: "d448cac096a040574fbee288ffbf1b79d931e05be65b8699003d18c35b213d99"
	I1209 04:19:50.510146 1588125 cri.go:89] found id: "a549d8652b346e26791e868967bc4ba6691a6f3e6d6890628c34d5aaabaee422"
	I1209 04:19:50.510149 1588125 cri.go:89] found id: "365b8c540ac8b4ba2ffbea68247ecdcb4b22e31ec4b497e44af8153b9232cba0"
	I1209 04:19:50.510153 1588125 cri.go:89] found id: "ade186251b0b03d5e21b3b509f2bf86293ef5ea617865111f2dd375f2cfaa2af"
	I1209 04:19:50.510157 1588125 cri.go:89] found id: "895a853e4aab3bfd20dc33efe93732055e9143ac6017c4be43840f854767cfac"
	I1209 04:19:50.510160 1588125 cri.go:89] found id: "3f583b93b3d82da13bf4c0cc7590397283a9f565f160c0b4aad9b625564dde0f"
	I1209 04:19:50.510163 1588125 cri.go:89] found id: "f23d383bb901021ad468c9e01555bb740a0facf5322dcee6b0def8a8f5c26cef"
	I1209 04:19:50.510167 1588125 cri.go:89] found id: "3e19f8eb0be8689c1e6db170c4a1893db77016e40e2d7ee36ae46433d1ab5dc7"
	I1209 04:19:50.510171 1588125 cri.go:89] found id: "5f20869a412bbccdd019d0d88792fb1e038ef017fb684b743afc406185107fab"
	I1209 04:19:50.510174 1588125 cri.go:89] found id: "23444ddd657bbd00eed4c8df42d61dc49f01325e6c8f6ca46b95e4e0ebfec769"
	I1209 04:19:50.510177 1588125 cri.go:89] found id: "3d9befd5158d0fb9dcd408b398d0ade47c7417da742e387aa66109ca8ed7918e"
	I1209 04:19:50.510179 1588125 cri.go:89] found id: ""
	I1209 04:19:50.510231 1588125 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 04:19:50.528746 1588125 out.go:203] 
	W1209 04:19:50.531738 1588125 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T04:19:50Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T04:19:50Z" level=error msg="open /run/runc: no such file or directory"
	
	W1209 04:19:50.531818 1588125 out.go:285] * 
	* 
	W1209 04:19:50.539833 1588125 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 04:19:50.542772 1588125 out.go:203] 

** /stderr **
addons_test.go:1115: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-377526 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.29s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (502.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-331811 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1209 04:27:15.848757 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:29:31.980465 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:29:59.695464 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:31:21.787639 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-790468/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:31:21.794109 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-790468/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:31:21.805869 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-790468/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:31:21.827402 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-790468/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:31:21.868792 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-790468/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:31:21.950313 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-790468/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:31:22.112062 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-790468/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:31:22.434089 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-790468/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:31:23.076142 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-790468/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:31:24.357582 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-790468/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:31:26.919070 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-790468/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:31:32.041114 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-790468/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:31:42.283327 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-790468/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:32:02.764757 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-790468/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:32:43.726450 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-790468/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:34:05.648340 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-790468/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:34:31.980081 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-331811 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m20.568807882s)

                                                
                                                
-- stdout --
	* [functional-331811] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-1577059/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1577059/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-331811" primary control-plane node in "functional-331811" cluster
	* Pulling base image v0.0.48-1765184860-22066 ...
	* Found network options:
	  - HTTP_PROXY=localhost:42299
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:42299 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-331811 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-331811 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000247658s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000051413s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000051413s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

                                                
                                                
** /stderr **
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-331811 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0": exit status 109
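The kubeadm output above points at two candidate remedies: minikube's own suggestion to pass --extra-config=kubelet.cgroup-driver=systemd, and the SystemVerification warning that kubelet v1.35+ on cgroups v1 requires the KubeletConfiguration option FailCgroupV1 set to false. A hedged retry sketch reusing the exact flags of the failed run; whether either remedy suffices on this 5.15 AWS kernel is not verified here:

	# Retry with the cgroup driver suggestion quoted verbatim from the
	# failure output above.
	out/minikube-linux-arm64 start -p functional-331811 --memory=4096 \
	  --apiserver-port=8441 --wait=all --driver=docker \
	  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 \
	  --extra-config=kubelet.cgroup-driver=systemd
	# Alternative path named by the warning: set failCgroupV1: false in the
	# kubelet configuration and explicitly skip the validation (see the KEP
	# link in the warning above).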
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
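The stderr warning earlier ("your NO_PROXY environment does not include the minikube IP (192.168.49.2)") has a documented fix at the vpn_and_proxy handbook page linked in the output. A sketch using only values printed by this run; note the host env snapshot above shows the proxy variables were already empty by post-mortem time, so this applies to the test's own proxied start:

	# Include the minikube IP in NO_PROXY before starting, per the warning;
	# proxy address and IP are taken from this run's output.
	export HTTP_PROXY=localhost:42299
	export NO_PROXY=192.168.49.2
	out/minikube-linux-arm64 start -p functional-331811 --driver=docker \
	  --container-runtime=crio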
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-331811
helpers_test.go:243: (dbg) docker inspect functional-331811:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87",
	        "Created": "2025-12-09T04:27:19.770188806Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1609115,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T04:27:19.828715728Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/hostname",
	        "HostsPath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/hosts",
	        "LogPath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87-json.log",
	        "Name": "/functional-331811",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-331811:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-331811",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87",
	                "LowerDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685-init/diff:/var/lib/docker/overlay2/cb3f2b8eaaa8875b2899fccd39c4eec1759909855a0b804bc10246bdeabb16ed/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-331811",
	                "Source": "/var/lib/docker/volumes/functional-331811/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-331811",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-331811",
	                "name.minikube.sigs.k8s.io": "functional-331811",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5c0753338127320f08906f0ae98414e1971b55970cf028db179c2214fd2722cb",
	            "SandboxKey": "/var/run/docker/netns/5c0753338127",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34255"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34256"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34259"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34257"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34258"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-331811": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:27:66:bb:a1:d6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8c16962547dedb5d6155d1546bcc27e347ab5261f9ad46fc3b09cc8fb9cc112f",
	                    "EndpointID": "1a5d6a22e9497009b4121ea56dc4839e2ff8827d92252c0464236c5f49c11216",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-331811",
	                        "51da5dad63e9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-331811 -n functional-331811
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-331811 -n functional-331811: exit status 6 (328.490245ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1209 04:35:35.733727 1614310 status.go:458] kubeconfig endpoint: get endpoint: "functional-331811" does not appear in /home/jenkins/minikube-integration/22081-1577059/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
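The status output above also flags a stale kubectl context, and minikube prints the fix itself. A sketch, assuming the functional-331811 profile from this run:

	# Repoint the kubeconfig entry at the current cluster, as the status
	# warning instructs.
	out/minikube-linux-arm64 -p functional-331811 update-context
	kubectl config current-context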
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-790468 image load --daemon kicbase/echo-server:functional-790468 --alsologtostderr                                                             │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ ssh            │ functional-790468 ssh sudo cat /usr/share/ca-certificates/1580521.pem                                                                                     │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ ssh            │ functional-790468 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image          │ functional-790468 image ls                                                                                                                                │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ ssh            │ functional-790468 ssh sudo cat /etc/ssl/certs/15805212.pem                                                                                                │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image          │ functional-790468 image save kicbase/echo-server:functional-790468 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ ssh            │ functional-790468 ssh sudo cat /usr/share/ca-certificates/15805212.pem                                                                                    │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ ssh            │ functional-790468 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image          │ functional-790468 image rm kicbase/echo-server:functional-790468 --alsologtostderr                                                                        │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image          │ functional-790468 image ls                                                                                                                                │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image          │ functional-790468 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image          │ functional-790468 image ls                                                                                                                                │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ update-context │ functional-790468 update-context --alsologtostderr -v=2                                                                                                   │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ update-context │ functional-790468 update-context --alsologtostderr -v=2                                                                                                   │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image          │ functional-790468 image save --daemon kicbase/echo-server:functional-790468 --alsologtostderr                                                             │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ update-context │ functional-790468 update-context --alsologtostderr -v=2                                                                                                   │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image          │ functional-790468 image ls --format yaml --alsologtostderr                                                                                                │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image          │ functional-790468 image ls --format short --alsologtostderr                                                                                               │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ ssh            │ functional-790468 ssh pgrep buildkitd                                                                                                                     │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │                     │
	│ image          │ functional-790468 image ls --format json --alsologtostderr                                                                                                │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image          │ functional-790468 image ls --format table --alsologtostderr                                                                                               │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image          │ functional-790468 image build -t localhost/my-image:functional-790468 testdata/build --alsologtostderr                                                    │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image          │ functional-790468 image ls                                                                                                                                │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ delete         │ -p functional-790468                                                                                                                                      │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ start          │ -p functional-331811 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0         │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 04:27:14
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 04:27:14.873218 1608727 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:27:14.873352 1608727 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:27:14.873356 1608727 out.go:374] Setting ErrFile to fd 2...
	I1209 04:27:14.873360 1608727 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:27:14.873630 1608727 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 04:27:14.874024 1608727 out.go:368] Setting JSON to false
	I1209 04:27:14.874889 1608727 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":32975,"bootTime":1765221460,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1209 04:27:14.874950 1608727 start.go:143] virtualization:  
	I1209 04:27:14.879136 1608727 out.go:179] * [functional-331811] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 04:27:14.883215 1608727 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 04:27:14.883356 1608727 notify.go:221] Checking for updates...
	I1209 04:27:14.889376 1608727 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:27:14.892735 1608727 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:27:14.895687 1608727 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1577059/.minikube
	I1209 04:27:14.898583 1608727 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 04:27:14.901460 1608727 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 04:27:14.904522 1608727 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:27:14.928251 1608727 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:27:14.928369 1608727 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:27:14.989891 1608727 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-09 04:27:14.980210859 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:27:14.990015 1608727 docker.go:319] overlay module found
	I1209 04:27:14.993190 1608727 out.go:179] * Using the docker driver based on user configuration
	I1209 04:27:14.996167 1608727 start.go:309] selected driver: docker
	I1209 04:27:14.996177 1608727 start.go:927] validating driver "docker" against <nil>
	I1209 04:27:14.996190 1608727 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 04:27:14.996994 1608727 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:27:15.067500 1608727 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-09 04:27:15.057510611 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:27:15.067645 1608727 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1209 04:27:15.067861 1608727 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 04:27:15.070887 1608727 out.go:179] * Using Docker driver with root privileges
	I1209 04:27:15.074002 1608727 cni.go:84] Creating CNI manager for ""
	I1209 04:27:15.074072 1608727 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 04:27:15.074080 1608727 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 04:27:15.074170 1608727 start.go:353] cluster config:
	{Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:27:15.079175 1608727 out.go:179] * Starting "functional-331811" primary control-plane node in "functional-331811" cluster
	I1209 04:27:15.082091 1608727 cache.go:134] Beginning downloading kic base image for docker with crio
	I1209 04:27:15.085081 1608727 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 04:27:15.087885 1608727 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1209 04:27:15.087943 1608727 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1209 04:27:15.087950 1608727 cache.go:65] Caching tarball of preloaded images
	I1209 04:27:15.087966 1608727 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 04:27:15.088039 1608727 preload.go:238] Found /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1209 04:27:15.088049 1608727 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1209 04:27:15.088413 1608727 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/config.json ...
	I1209 04:27:15.088433 1608727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/config.json: {Name:mk8f02041b88ee32d1ef68ec2825a3672a273711 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:27:15.110365 1608727 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 04:27:15.110379 1608727 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 04:27:15.110418 1608727 cache.go:243] Successfully downloaded all kic artifacts
	I1209 04:27:15.110451 1608727 start.go:360] acquireMachinesLock for functional-331811: {Name:mkd467b4f3dd08f05040481144eb7b6b1e27d3ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 04:27:15.110628 1608727 start.go:364] duration metric: took 159.763µs to acquireMachinesLock for "functional-331811"
	I1209 04:27:15.110658 1608727 start.go:93] Provisioning new machine with config: &{Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 04:27:15.110729 1608727 start.go:125] createHost starting for "" (driver="docker")
	I1209 04:27:15.114364 1608727 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1209 04:27:15.114725 1608727 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:42299 to docker env.
	I1209 04:27:15.114753 1608727 start.go:159] libmachine.API.Create for "functional-331811" (driver="docker")
	I1209 04:27:15.114777 1608727 client.go:173] LocalClient.Create starting
	I1209 04:27:15.114842 1608727 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem
	I1209 04:27:15.114878 1608727 main.go:143] libmachine: Decoding PEM data...
	I1209 04:27:15.114893 1608727 main.go:143] libmachine: Parsing certificate...
	I1209 04:27:15.114962 1608727 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem
	I1209 04:27:15.114985 1608727 main.go:143] libmachine: Decoding PEM data...
	I1209 04:27:15.115001 1608727 main.go:143] libmachine: Parsing certificate...
	I1209 04:27:15.115406 1608727 cli_runner.go:164] Run: docker network inspect functional-331811 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1209 04:27:15.132873 1608727 cli_runner.go:211] docker network inspect functional-331811 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1209 04:27:15.132945 1608727 network_create.go:284] running [docker network inspect functional-331811] to gather additional debugging logs...
	I1209 04:27:15.132961 1608727 cli_runner.go:164] Run: docker network inspect functional-331811
	W1209 04:27:15.149840 1608727 cli_runner.go:211] docker network inspect functional-331811 returned with exit code 1
	I1209 04:27:15.149874 1608727 network_create.go:287] error running [docker network inspect functional-331811]: docker network inspect functional-331811: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-331811 not found
	I1209 04:27:15.149886 1608727 network_create.go:289] output of [docker network inspect functional-331811]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-331811 not found
	
	** /stderr **
	I1209 04:27:15.149997 1608727 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 04:27:15.166405 1608727 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018cc760}
	I1209 04:27:15.166434 1608727 network_create.go:124] attempt to create docker network functional-331811 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1209 04:27:15.166484 1608727 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-331811 functional-331811
	I1209 04:27:15.221906 1608727 network_create.go:108] docker network functional-331811 192.168.49.0/24 created
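network.go first probes for a free private /24 (192.168.49.0/24 was free on the first attempt here) and then shells out to docker network create with an explicit subnet, gateway, and MTU, exactly as logged above. A rough Go sketch of that flow via os/exec; the candidate-subnet list is an assumption for illustration, and minikube's real prober also inspects host interfaces:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // createNetwork mirrors the logged command:
    //   docker network create --driver=bridge --subnet=... --gateway=...
    //     -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 ... <name>
    func createNetwork(name, subnet, gateway string) error {
    	cmd := exec.Command("docker", "network", "create",
    		"--driver=bridge",
    		"--subnet="+subnet,
    		"--gateway="+gateway,
    		"-o", "--ip-masq",
    		"-o", "--icc",
    		"-o", "com.docker.network.driver.mtu=1500",
    		"--label=created_by.minikube.sigs.k8s.io=true",
    		"--label=name.minikube.sigs.k8s.io="+name,
    		name)
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("docker network create: %v: %s", err, out)
    	}
    	return nil
    }

    func main() {
    	// Illustrative candidates; the first subnet docker accepts wins.
    	for _, base := range []string{"192.168.49", "192.168.58", "192.168.67"} {
    		if err := createNetwork("functional-331811", base+".0/24", base+".1"); err != nil {
    			fmt.Println("subnet busy or rejected, trying next:", err)
    			continue
    		}
    		fmt.Println("created network on", base+".0/24")
    		return
    	}
    	fmt.Println("no free subnet found")
    }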
	I1209 04:27:15.221928 1608727 kic.go:121] calculated static IP "192.168.49.2" for the "functional-331811" container
	I1209 04:27:15.222024 1608727 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1209 04:27:15.236872 1608727 cli_runner.go:164] Run: docker volume create functional-331811 --label name.minikube.sigs.k8s.io=functional-331811 --label created_by.minikube.sigs.k8s.io=true
	I1209 04:27:15.255183 1608727 oci.go:103] Successfully created a docker volume functional-331811
	I1209 04:27:15.255256 1608727 cli_runner.go:164] Run: docker run --rm --name functional-331811-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-331811 --entrypoint /usr/bin/test -v functional-331811:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -d /var/lib
	I1209 04:27:15.801381 1608727 oci.go:107] Successfully prepared a docker volume functional-331811
	I1209 04:27:15.801428 1608727 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1209 04:27:15.801436 1608727 kic.go:194] Starting extracting preloaded images to volume ...
	I1209 04:27:15.801504 1608727 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v functional-331811:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir
	I1209 04:27:19.704009 1608727 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v functional-331811:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir: (3.90245716s)
	I1209 04:27:19.704031 1608727 kic.go:203] duration metric: took 3.902590764s to extract preloaded images to volume ...
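The two docker run lines above are how the node's /var gets seeded: a throwaway container (--rm) mounts the named volume plus the read-only preload tarball and untars it in-stream (tar -I lz4) into /extractDir, taking ~3.9s in this run. A hedged Go sketch of that one-shot extraction, with the host paths shortened for readability:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	const (
    		volume  = "functional-331811"
    		tarball = "/path/to/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4"
    		image   = "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066"
    	)
    	// --rm: the helper container exists only for the duration of the
    	// untar; `tar -I lz4` decompresses the lz4 stream on the fly.
    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", volume+":/extractDir",
    		image,
    		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		fmt.Fprintln(os.Stderr, "extract failed:", err)
    		os.Exit(1)
    	}
    }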
	W1209 04:27:19.704162 1608727 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1209 04:27:19.704264 1608727 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1209 04:27:19.756682 1608727 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-331811 --name functional-331811 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-331811 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-331811 --network functional-331811 --ip 192.168.49.2 --volume functional-331811:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c
	I1209 04:27:20.058296 1608727 cli_runner.go:164] Run: docker container inspect functional-331811 --format={{.State.Running}}
	I1209 04:27:20.084470 1608727 cli_runner.go:164] Run: docker container inspect functional-331811 --format={{.State.Status}}
	I1209 04:27:20.113785 1608727 cli_runner.go:164] Run: docker exec functional-331811 stat /var/lib/dpkg/alternatives/iptables
	I1209 04:27:20.161424 1608727 oci.go:144] the created container "functional-331811" has a running status.
	I1209 04:27:20.161443 1608727 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa...
	I1209 04:27:20.567622 1608727 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1209 04:27:20.587180 1608727 cli_runner.go:164] Run: docker container inspect functional-331811 --format={{.State.Status}}
	I1209 04:27:20.603913 1608727 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1209 04:27:20.603924 1608727 kic_runner.go:114] Args: [docker exec --privileged functional-331811 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1209 04:27:20.643422 1608727 cli_runner.go:164] Run: docker container inspect functional-331811 --format={{.State.Status}}
	I1209 04:27:20.661859 1608727 machine.go:94] provisionDockerMachine start ...
	I1209 04:27:20.661942 1608727 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:27:20.686216 1608727 main.go:143] libmachine: Using SSH client type: native
	I1209 04:27:20.686548 1608727 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34255 <nil> <nil>}
	I1209 04:27:20.686556 1608727 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 04:27:20.687184 1608727 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53252->127.0.0.1:34255: read: connection reset by peer
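The "Error dialing TCP ... connection reset by peer" here is the normal race between container start and sshd coming up inside it; the provisioner simply retries until the dial at 04:27:23 below succeeds. A minimal retry-dial sketch in Go; the address and timings are illustrative, and a successful TCP connect is only a proxy for sshd readiness:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // waitForSSH dials the forwarded SSH port until the TCP handshake
    // succeeds or the deadline passes. Early attempts are expected to
    // fail with "connection reset" while sshd is still starting.
    func waitForSSH(addr string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    		if err == nil {
    			conn.Close()
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("ssh on %s not reachable within %s", addr, timeout)
    }

    func main() {
    	if err := waitForSSH("127.0.0.1:34255", 30*time.Second); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("sshd is up")
    }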
	I1209 04:27:23.842133 1608727 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-331811
	
	I1209 04:27:23.842148 1608727 ubuntu.go:182] provisioning hostname "functional-331811"
	I1209 04:27:23.842222 1608727 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:27:23.859708 1608727 main.go:143] libmachine: Using SSH client type: native
	I1209 04:27:23.860021 1608727 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34255 <nil> <nil>}
	I1209 04:27:23.860030 1608727 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-331811 && echo "functional-331811" | sudo tee /etc/hostname
	I1209 04:27:24.030314 1608727 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-331811
	
	I1209 04:27:24.030385 1608727 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:27:24.049795 1608727 main.go:143] libmachine: Using SSH client type: native
	I1209 04:27:24.050106 1608727 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34255 <nil> <nil>}
	I1209 04:27:24.050119 1608727 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-331811' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-331811/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-331811' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 04:27:24.202866 1608727 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 04:27:24.202883 1608727 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1577059/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1577059/.minikube}
	I1209 04:27:24.202915 1608727 ubuntu.go:190] setting up certificates
	I1209 04:27:24.202923 1608727 provision.go:84] configureAuth start
	I1209 04:27:24.202980 1608727 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-331811
	I1209 04:27:24.220070 1608727 provision.go:143] copyHostCerts
	I1209 04:27:24.220131 1608727 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem, removing ...
	I1209 04:27:24.220169 1608727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem
	I1209 04:27:24.220248 1608727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem (1078 bytes)
	I1209 04:27:24.220350 1608727 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem, removing ...
	I1209 04:27:24.220355 1608727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem
	I1209 04:27:24.220380 1608727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem (1123 bytes)
	I1209 04:27:24.220427 1608727 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem, removing ...
	I1209 04:27:24.220430 1608727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem
	I1209 04:27:24.220453 1608727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem (1675 bytes)
	I1209 04:27:24.220494 1608727 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem org=jenkins.functional-331811 san=[127.0.0.1 192.168.49.2 functional-331811 localhost minikube]
	I1209 04:27:24.388776 1608727 provision.go:177] copyRemoteCerts
	I1209 04:27:24.388835 1608727 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 04:27:24.388872 1608727 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:27:24.405891 1608727 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:27:24.510845 1608727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 04:27:24.528442 1608727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 04:27:24.545543 1608727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 04:27:24.562391 1608727 provision.go:87] duration metric: took 359.444367ms to configureAuth
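configureAuth above generates a server certificate whose SAN list covers every name the machine may be reached by: 127.0.0.1, 192.168.49.2, functional-331811, localhost, minikube (the provision.go:117 line). A compact Go sketch of issuing such a SAN-bearing certificate with crypto/x509; minikube signs with its own CA, while this example self-signs to stay short:

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.functional-331811"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SAN list from the provision.go:117 log line:
    		DNSNames:    []string{"functional-331811", "localhost", "minikube"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }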
	I1209 04:27:24.562408 1608727 ubuntu.go:206] setting minikube options for container-runtime
	I1209 04:27:24.562633 1608727 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 04:27:24.562743 1608727 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:27:24.579583 1608727 main.go:143] libmachine: Using SSH client type: native
	I1209 04:27:24.579920 1608727 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34255 <nil> <nil>}
	I1209 04:27:24.579940 1608727 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 04:27:24.885779 1608727 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 04:27:24.885791 1608727 machine.go:97] duration metric: took 4.223920347s to provisionDockerMachine
	I1209 04:27:24.885800 1608727 client.go:176] duration metric: took 9.7710189s to LocalClient.Create
	I1209 04:27:24.885821 1608727 start.go:167] duration metric: took 9.771068862s to libmachine.API.Create "functional-331811"
	I1209 04:27:24.885828 1608727 start.go:293] postStartSetup for "functional-331811" (driver="docker")
	I1209 04:27:24.885839 1608727 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 04:27:24.885900 1608727 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 04:27:24.885937 1608727 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:27:24.902552 1608727 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:27:25.010395 1608727 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 04:27:25.014538 1608727 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 04:27:25.014557 1608727 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 04:27:25.014586 1608727 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1577059/.minikube/addons for local assets ...
	I1209 04:27:25.014647 1608727 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1577059/.minikube/files for local assets ...
	I1209 04:27:25.014743 1608727 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem -> 15805212.pem in /etc/ssl/certs
	I1209 04:27:25.014830 1608727 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/test/nested/copy/1580521/hosts -> hosts in /etc/test/nested/copy/1580521
	I1209 04:27:25.014878 1608727 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1580521
	I1209 04:27:25.023511 1608727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem --> /etc/ssl/certs/15805212.pem (1708 bytes)
	I1209 04:27:25.042516 1608727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/test/nested/copy/1580521/hosts --> /etc/test/nested/copy/1580521/hosts (40 bytes)
	I1209 04:27:25.059936 1608727 start.go:296] duration metric: took 174.094259ms for postStartSetup
	I1209 04:27:25.060312 1608727 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-331811
	I1209 04:27:25.077297 1608727 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/config.json ...
	I1209 04:27:25.077561 1608727 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 04:27:25.077609 1608727 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:27:25.094490 1608727 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:27:25.195273 1608727 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 04:27:25.199678 1608727 start.go:128] duration metric: took 10.088935921s to createHost
	I1209 04:27:25.199691 1608727 start.go:83] releasing machines lock for "functional-331811", held for 10.08905579s
	I1209 04:27:25.199759 1608727 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-331811
	I1209 04:27:25.220097 1608727 out.go:179] * Found network options:
	I1209 04:27:25.223094 1608727 out.go:179]   - HTTP_PROXY=localhost:42299
	W1209 04:27:25.225889 1608727 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1209 04:27:25.228835 1608727 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1209 04:27:25.231672 1608727 ssh_runner.go:195] Run: cat /version.json
	I1209 04:27:25.231711 1608727 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:27:25.231773 1608727 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 04:27:25.231823 1608727 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:27:25.250791 1608727 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:27:25.252037 1608727 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:27:25.354931 1608727 ssh_runner.go:195] Run: systemctl --version
	I1209 04:27:25.440733 1608727 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 04:27:25.476715 1608727 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 04:27:25.480939 1608727 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 04:27:25.481002 1608727 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 04:27:25.507843 1608727 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1209 04:27:25.507857 1608727 start.go:496] detecting cgroup driver to use...
	I1209 04:27:25.507887 1608727 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 04:27:25.507937 1608727 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 04:27:25.524484 1608727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 04:27:25.536994 1608727 docker.go:218] disabling cri-docker service (if available) ...
	I1209 04:27:25.537048 1608727 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 04:27:25.554349 1608727 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 04:27:25.571472 1608727 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 04:27:25.688713 1608727 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 04:27:25.809528 1608727 docker.go:234] disabling docker service ...
	I1209 04:27:25.809584 1608727 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 04:27:25.832394 1608727 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 04:27:25.845561 1608727 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 04:27:25.958248 1608727 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 04:27:26.084933 1608727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 04:27:26.098699 1608727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 04:27:26.112392 1608727 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 04:27:26.112445 1608727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:27:26.121018 1608727 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 04:27:26.121089 1608727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:27:26.129874 1608727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:27:26.139087 1608727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:27:26.148098 1608727 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 04:27:26.156426 1608727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:27:26.164954 1608727 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:27:26.178119 1608727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:27:26.186969 1608727 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 04:27:26.194209 1608727 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 04:27:26.201205 1608727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:27:26.317100 1608727 ssh_runner.go:195] Run: sudo systemctl restart crio
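The block from 04:27:26.112 to here rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause_image, cgroup_manager = "cgroupfs", conmon_cgroup = "pod", the unprivileged-port default_sysctls entry), then runs daemon-reload and restarts crio so the drop-in takes effect. A rough Go equivalent of one of those key = value rewrites; regexp-based over the whole file, and, like the sed calls, it assumes the key already exists:

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // setTOMLKey replaces any existing `key = ...` line with the desired
    // value, mirroring `sed -i 's|^.*cgroup_manager = .*$|...|'` in the
    // log. It does not append the key when it is absent.
    func setTOMLKey(path, key, value string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    	out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
    	return os.WriteFile(path, out, 0o644)
    }

    func main() {
    	if err := setTOMLKey("/etc/crio/crio.conf.d/02-crio.conf",
    		"cgroup_manager", "cgroupfs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }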
	I1209 04:27:26.493977 1608727 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 04:27:26.494049 1608727 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 04:27:26.497697 1608727 start.go:564] Will wait 60s for crictl version
	I1209 04:27:26.497765 1608727 ssh_runner.go:195] Run: which crictl
	I1209 04:27:26.501138 1608727 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 04:27:26.525509 1608727 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1209 04:27:26.525620 1608727 ssh_runner.go:195] Run: crio --version
	I1209 04:27:26.553612 1608727 ssh_runner.go:195] Run: crio --version
	I1209 04:27:26.584313 1608727 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1209 04:27:26.586892 1608727 cli_runner.go:164] Run: docker network inspect functional-331811 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 04:27:26.602482 1608727 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1209 04:27:26.606432 1608727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
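The one-liner above is an idempotent /etc/hosts update: grep -v strips any stale host.minikube.internal line, the fresh mapping is appended, and the temp file is copied back over /etc/hosts so the file is never truncated mid-read. The same pattern in Go; the helper is hypothetical and omits the sudo handling:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry drops any line ending in "\t<host>" and appends a
    // fresh "<ip>\t<host>" mapping, like the grep -v / echo / cp pipeline.
    func ensureHostsEntry(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+host) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+host)
    	tmp := path + ".tmp"
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, path) // swap the rewritten file into place
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }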
	I1209 04:27:26.616434 1608727 kubeadm.go:884] updating cluster {Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 04:27:26.616543 1608727 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1209 04:27:26.616594 1608727 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:27:26.655979 1608727 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 04:27:26.655991 1608727 crio.go:433] Images already preloaded, skipping extraction
	I1209 04:27:26.656045 1608727 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:27:26.684758 1608727 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 04:27:26.684770 1608727 cache_images.go:86] Images are preloaded, skipping loading
	I1209 04:27:26.684776 1608727 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1209 04:27:26.684865 1608727 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-331811 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
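The [Service] block in the generated kubelet unit above starts with a bare ExecStart= line: in a systemd drop-in (here /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, scp'd a few lines below), an empty ExecStart= first clears the command inherited from the base unit before the full kubelet command line replaces it. A sketch writing such a drop-in from Go, with the flag list abbreviated for readability:

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// The empty ExecStart= resets the list inherited from
    	// /lib/systemd/system/kubelet.service; the second line sets the
    	// real command. Flags trimmed here for illustration.
    	dropIn := `[Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --node-ip=192.168.49.2 --hostname-override=functional-331811
    `
    	path := "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
    	if err := os.WriteFile(path, []byte(dropIn), 0o644); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    	// A `systemctl daemon-reload` (run at 04:27:26.825 in the log) is
    	// still required before the drop-in takes effect.
    }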
	I1209 04:27:26.684942 1608727 ssh_runner.go:195] Run: crio config
	I1209 04:27:26.758178 1608727 cni.go:84] Creating CNI manager for ""
	I1209 04:27:26.758210 1608727 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 04:27:26.758230 1608727 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 04:27:26.758256 1608727 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-331811 NodeName:functional-331811 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 04:27:26.758384 1608727 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-331811"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
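The generated kubeadm.yaml above is one file carrying four YAML documents separated by --- (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small Go sketch that walks such a multi-document file with gopkg.in/yaml.v3 and prints each document's kind; generic maps only, no kubeadm API types assumed:

    package main

    import (
    	"errors"
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f) // yields one document per Decode call
    	for {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); err != nil {
    			if errors.Is(err, io.EOF) {
    				break // no more documents after the last ---
    			}
    			fmt.Fprintln(os.Stderr, "bad document:", err)
    			os.Exit(1)
    		}
    		fmt.Printf("%s/%s\n", doc["apiVersion"], doc["kind"])
    	}
    }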
	
	I1209 04:27:26.758454 1608727 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1209 04:27:26.766520 1608727 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 04:27:26.766603 1608727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 04:27:26.774118 1608727 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1209 04:27:26.786950 1608727 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1209 04:27:26.799467 1608727 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1209 04:27:26.812461 1608727 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1209 04:27:26.816171 1608727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 04:27:26.825774 1608727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:27:26.947506 1608727 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 04:27:26.966253 1608727 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811 for IP: 192.168.49.2
	I1209 04:27:26.966264 1608727 certs.go:195] generating shared ca certs ...
	I1209 04:27:26.966278 1608727 certs.go:227] acquiring lock for ca certs: {Name:mkbe8bce08db7aa945866791683d426e1b560718 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:27:26.966427 1608727 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key
	I1209 04:27:26.966470 1608727 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key
	I1209 04:27:26.966476 1608727 certs.go:257] generating profile certs ...
	I1209 04:27:26.966535 1608727 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.key
	I1209 04:27:26.966544 1608727 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt with IP's: []
	I1209 04:27:27.092197 1608727 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt ...
	I1209 04:27:27.092214 1608727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt: {Name:mk9cef57654ce5f295288c98244cf15ba2658124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:27:27.092411 1608727 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.key ...
	I1209 04:27:27.092418 1608727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.key: {Name:mkceab5f26ece6b3619e6d576bdc83a12566c0b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:27:27.092505 1608727 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.key.29f4af34
	I1209 04:27:27.092516 1608727 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.crt.29f4af34 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1209 04:27:27.182690 1608727 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.crt.29f4af34 ...
	I1209 04:27:27.182704 1608727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.crt.29f4af34: {Name:mkd9e4bf12250d4c500bd37936379c3769c90816 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:27:27.182895 1608727 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.key.29f4af34 ...
	I1209 04:27:27.182902 1608727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.key.29f4af34: {Name:mkb1d5c7048862d861a815520c017dce11474b54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:27:27.182984 1608727 certs.go:382] copying /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.crt.29f4af34 -> /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.crt
	I1209 04:27:27.183093 1608727 certs.go:386] copying /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.key.29f4af34 -> /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.key
	I1209 04:27:27.183151 1608727 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.key
	I1209 04:27:27.183162 1608727 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.crt with IP's: []
	I1209 04:27:27.510076 1608727 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.crt ...
	I1209 04:27:27.510092 1608727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.crt: {Name:mkc1d46c8b4a67a6c3a623ada07cf37b13fe4f6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:27:27.510277 1608727 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.key ...
	I1209 04:27:27.510284 1608727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.key: {Name:mk3d80036b7da8024429624b077496f46568ee74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
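The apiserver certificate generated above is issued for IPs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]; 10.96.0.1 is the first address of the ServiceCIDR 10.96.0.0/12 from the config, i.e. the in-cluster ClusterIP of the kubernetes service, which is why it must appear in the cert's SANs. Computing that first address in Go with net/netip:

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	prefix := netip.MustParsePrefix("10.96.0.0/12") // ServiceCIDR from the config
    	first := prefix.Masked().Addr().Next()          // network address + 1
    	fmt.Println(first)                              // 10.96.0.1
    }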
	I1209 04:27:27.510460 1608727 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem (1338 bytes)
	W1209 04:27:27.510499 1608727 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521_empty.pem, impossibly tiny 0 bytes
	I1209 04:27:27.510507 1608727 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 04:27:27.510544 1608727 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem (1078 bytes)
	I1209 04:27:27.510593 1608727 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem (1123 bytes)
	I1209 04:27:27.510618 1608727 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem (1675 bytes)
	I1209 04:27:27.510665 1608727 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem (1708 bytes)
	I1209 04:27:27.511238 1608727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 04:27:27.528960 1608727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 04:27:27.546936 1608727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 04:27:27.564461 1608727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 04:27:27.581551 1608727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 04:27:27.599655 1608727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 04:27:27.616771 1608727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 04:27:27.633587 1608727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 04:27:27.650680 1608727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 04:27:27.671969 1608727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem --> /usr/share/ca-certificates/1580521.pem (1338 bytes)
	I1209 04:27:27.689616 1608727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem --> /usr/share/ca-certificates/15805212.pem (1708 bytes)
	I1209 04:27:27.707160 1608727 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 04:27:27.719746 1608727 ssh_runner.go:195] Run: openssl version
	I1209 04:27:27.726106 1608727 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:27:27.733638 1608727 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 04:27:27.740875 1608727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:27:27.744461 1608727 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:17 /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:27:27.744515 1608727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:27:27.786980 1608727 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 04:27:27.794333 1608727 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1209 04:27:27.801499 1608727 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1580521.pem
	I1209 04:27:27.808955 1608727 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1580521.pem /etc/ssl/certs/1580521.pem
	I1209 04:27:27.816326 1608727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1580521.pem
	I1209 04:27:27.820088 1608727 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:27 /usr/share/ca-certificates/1580521.pem
	I1209 04:27:27.820159 1608727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1580521.pem
	I1209 04:27:27.863321 1608727 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 04:27:27.870913 1608727 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1580521.pem /etc/ssl/certs/51391683.0
	I1209 04:27:27.878147 1608727 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15805212.pem
	I1209 04:27:27.885523 1608727 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15805212.pem /etc/ssl/certs/15805212.pem
	I1209 04:27:27.893804 1608727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15805212.pem
	I1209 04:27:27.897658 1608727 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:27 /usr/share/ca-certificates/15805212.pem
	I1209 04:27:27.897716 1608727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15805212.pem
	I1209 04:27:27.939257 1608727 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 04:27:27.946865 1608727 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/15805212.pem /etc/ssl/certs/3ec20f2e.0
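The cert-installation steps from 04:27:27.650 to here follow OpenSSL's lookup-by-hash convention: each PEM is placed under /usr/share/ca-certificates/, then a symlink named <subject-hash>.0 (the output of openssl x509 -hash -noout -in <cert>, e.g. b5213941) is created in /etc/ssl/certs so libraries that trust that directory can find it. A Go sketch of the hash-then-symlink step, shelling out to openssl as the log does:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // linkByHash creates /etc/ssl/certs/<subject-hash>.0 pointing at the
    // certificate, the naming OpenSSL uses to locate trusted CAs.
    func linkByHash(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := "/etc/ssl/certs/" + hash + ".0"
    	os.Remove(link) // mimic ln -fs: replace any stale link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }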
	I1209 04:27:27.954026 1608727 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 04:27:27.957609 1608727 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 04:27:27.957649 1608727 kubeadm.go:401] StartCluster: {Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:27:27.957717 1608727 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 04:27:27.957775 1608727 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:27:27.985250 1608727 cri.go:89] found id: ""
	I1209 04:27:27.985320 1608727 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 04:27:27.993064 1608727 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 04:27:28.003044 1608727 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 04:27:28.003125 1608727 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 04:27:28.012798 1608727 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 04:27:28.012817 1608727 kubeadm.go:158] found existing configuration files:
	
	I1209 04:27:28.012872 1608727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1209 04:27:28.021057 1608727 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 04:27:28.021119 1608727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 04:27:28.028776 1608727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1209 04:27:28.037370 1608727 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 04:27:28.037448 1608727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 04:27:28.045127 1608727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1209 04:27:28.053140 1608727 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 04:27:28.053197 1608727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 04:27:28.060844 1608727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1209 04:27:28.068775 1608727 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 04:27:28.068841 1608727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 04:27:28.076312 1608727 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 04:27:28.115324 1608727 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1209 04:27:28.115695 1608727 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 04:27:28.184891 1608727 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 04:27:28.184971 1608727 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1209 04:27:28.185006 1608727 kubeadm.go:319] OS: Linux
	I1209 04:27:28.185059 1608727 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 04:27:28.185118 1608727 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1209 04:27:28.185173 1608727 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 04:27:28.185230 1608727 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 04:27:28.185277 1608727 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 04:27:28.185334 1608727 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 04:27:28.185402 1608727 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 04:27:28.185458 1608727 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 04:27:28.185513 1608727 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1209 04:27:28.250097 1608727 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 04:27:28.250214 1608727 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 04:27:28.250336 1608727 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 04:27:28.259007 1608727 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 04:27:28.265426 1608727 out.go:252]   - Generating certificates and keys ...
	I1209 04:27:28.265522 1608727 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 04:27:28.265594 1608727 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 04:27:28.474152 1608727 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 04:27:28.620336 1608727 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1209 04:27:28.801821 1608727 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1209 04:27:28.862661 1608727 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1209 04:27:29.117803 1608727 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1209 04:27:29.117970 1608727 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-331811 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1209 04:27:29.228245 1608727 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1209 04:27:29.228533 1608727 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-331811 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1209 04:27:29.685917 1608727 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 04:27:29.951838 1608727 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 04:27:30.560978 1608727 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1209 04:27:30.561115 1608727 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 04:27:31.040247 1608727 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 04:27:31.529845 1608727 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 04:27:32.416152 1608727 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 04:27:32.495649 1608727 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 04:27:32.884830 1608727 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 04:27:32.885510 1608727 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 04:27:32.888592 1608727 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 04:27:32.892063 1608727 out.go:252]   - Booting up control plane ...
	I1209 04:27:32.892174 1608727 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 04:27:32.892251 1608727 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 04:27:32.893568 1608727 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 04:27:32.909185 1608727 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 04:27:32.909286 1608727 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 04:27:32.916708 1608727 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 04:27:32.917159 1608727 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 04:27:32.917358 1608727 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 04:27:33.048874 1608727 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 04:27:33.048986 1608727 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 04:31:33.049054 1608727 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000247658s
	I1209 04:31:33.049082 1608727 kubeadm.go:319] 
	I1209 04:31:33.049187 1608727 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1209 04:31:33.049253 1608727 kubeadm.go:319] 	- The kubelet is not running
	I1209 04:31:33.049835 1608727 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 04:31:33.049858 1608727 kubeadm.go:319] 
	I1209 04:31:33.050054 1608727 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 04:31:33.050108 1608727 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1209 04:31:33.050169 1608727 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1209 04:31:33.050176 1608727 kubeadm.go:319] 
	I1209 04:31:33.054517 1608727 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 04:31:33.054998 1608727 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1209 04:31:33.055108 1608727 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 04:31:33.055376 1608727 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1209 04:31:33.055381 1608727 kubeadm.go:319] 
	I1209 04:31:33.055472 1608727 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1209 04:31:33.055614 1608727 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-331811 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-331811 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000247658s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
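Note: the probe kubeadm is timing out on can be replayed by hand with the same commands the output above prints; a minimal sketch, run inside the node (for example via `minikube ssh -p functional-331811`, profile name taken from this run):

    # the health check kubeadm polls for up to 4m0s
    curl -sSL http://127.0.0.1:10248/healthz
    # why the kubelet is not serving it, per the advice in the output
    systemctl status kubelet
    journalctl -xeu kubelet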
	
	I1209 04:31:33.055720 1608727 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1209 04:31:33.466421 1608727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 04:31:33.479186 1608727 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 04:31:33.479238 1608727 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 04:31:33.486760 1608727 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 04:31:33.486768 1608727 kubeadm.go:158] found existing configuration files:
	
	I1209 04:31:33.486816 1608727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1209 04:31:33.494471 1608727 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 04:31:33.494531 1608727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 04:31:33.501803 1608727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1209 04:31:33.509519 1608727 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 04:31:33.509571 1608727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 04:31:33.516990 1608727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1209 04:31:33.524872 1608727 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 04:31:33.524929 1608727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 04:31:33.532350 1608727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1209 04:31:33.539971 1608727 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 04:31:33.540023 1608727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 04:31:33.547341 1608727 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 04:31:33.588465 1608727 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1209 04:31:33.588523 1608727 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 04:31:33.657403 1608727 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 04:31:33.657480 1608727 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1209 04:31:33.657525 1608727 kubeadm.go:319] OS: Linux
	I1209 04:31:33.657579 1608727 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 04:31:33.657629 1608727 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1209 04:31:33.657685 1608727 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 04:31:33.657742 1608727 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 04:31:33.657799 1608727 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 04:31:33.657849 1608727 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 04:31:33.657903 1608727 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 04:31:33.657964 1608727 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 04:31:33.658013 1608727 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1209 04:31:33.723040 1608727 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 04:31:33.723171 1608727 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 04:31:33.723274 1608727 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 04:31:33.730245 1608727 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 04:31:33.735596 1608727 out.go:252]   - Generating certificates and keys ...
	I1209 04:31:33.735702 1608727 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 04:31:33.735781 1608727 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 04:31:33.735879 1608727 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 04:31:33.735952 1608727 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1209 04:31:33.736030 1608727 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 04:31:33.736098 1608727 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1209 04:31:33.736186 1608727 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1209 04:31:33.736261 1608727 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1209 04:31:33.736346 1608727 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 04:31:33.736441 1608727 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 04:31:33.736671 1608727 kubeadm.go:319] [certs] Using the existing "sa" key
	I1209 04:31:33.736728 1608727 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 04:31:34.144572 1608727 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 04:31:34.246097 1608727 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 04:31:34.479884 1608727 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 04:31:34.575452 1608727 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 04:31:34.761045 1608727 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 04:31:34.761875 1608727 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 04:31:34.764477 1608727 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 04:31:34.767800 1608727 out.go:252]   - Booting up control plane ...
	I1209 04:31:34.767914 1608727 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 04:31:34.768015 1608727 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 04:31:34.768094 1608727 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 04:31:34.782090 1608727 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 04:31:34.782192 1608727 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 04:31:34.792192 1608727 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 04:31:34.792322 1608727 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 04:31:34.792368 1608727 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 04:31:34.931038 1608727 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 04:31:34.931149 1608727 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 04:35:34.926771 1608727 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000051413s
	I1209 04:35:34.926790 1608727 kubeadm.go:319] 
	I1209 04:35:34.926850 1608727 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1209 04:35:34.926882 1608727 kubeadm.go:319] 	- The kubelet is not running
	I1209 04:35:34.927298 1608727 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 04:35:34.927317 1608727 kubeadm.go:319] 
	I1209 04:35:34.927655 1608727 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 04:35:34.927712 1608727 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1209 04:35:34.927767 1608727 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1209 04:35:34.927773 1608727 kubeadm.go:319] 
	I1209 04:35:34.932536 1608727 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 04:35:34.933190 1608727 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1209 04:35:34.933308 1608727 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 04:35:34.933570 1608727 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1209 04:35:34.933574 1608727 kubeadm.go:319] 
	I1209 04:35:34.933691 1608727 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1209 04:35:34.933723 1608727 kubeadm.go:403] duration metric: took 8m6.976074979s to StartCluster
	I1209 04:35:34.933757 1608727 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:35:34.933814 1608727 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:35:34.960431 1608727 cri.go:89] found id: ""
	I1209 04:35:34.960445 1608727 logs.go:282] 0 containers: []
	W1209 04:35:34.960452 1608727 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:35:34.960458 1608727 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:35:34.960516 1608727 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:35:34.989431 1608727 cri.go:89] found id: ""
	I1209 04:35:34.989445 1608727 logs.go:282] 0 containers: []
	W1209 04:35:34.989451 1608727 logs.go:284] No container was found matching "etcd"
	I1209 04:35:34.989456 1608727 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:35:34.989522 1608727 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:35:35.017000 1608727 cri.go:89] found id: ""
	I1209 04:35:35.017015 1608727 logs.go:282] 0 containers: []
	W1209 04:35:35.017022 1608727 logs.go:284] No container was found matching "coredns"
	I1209 04:35:35.017028 1608727 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:35:35.017095 1608727 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:35:35.046028 1608727 cri.go:89] found id: ""
	I1209 04:35:35.046044 1608727 logs.go:282] 0 containers: []
	W1209 04:35:35.046052 1608727 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:35:35.046057 1608727 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:35:35.046119 1608727 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:35:35.085196 1608727 cri.go:89] found id: ""
	I1209 04:35:35.085210 1608727 logs.go:282] 0 containers: []
	W1209 04:35:35.085217 1608727 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:35:35.085222 1608727 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:35:35.085287 1608727 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:35:35.117308 1608727 cri.go:89] found id: ""
	I1209 04:35:35.117322 1608727 logs.go:282] 0 containers: []
	W1209 04:35:35.117331 1608727 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:35:35.117338 1608727 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:35:35.117403 1608727 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:35:35.151860 1608727 cri.go:89] found id: ""
	I1209 04:35:35.151873 1608727 logs.go:282] 0 containers: []
	W1209 04:35:35.151880 1608727 logs.go:284] No container was found matching "kindnet"
	I1209 04:35:35.151888 1608727 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:35:35.151899 1608727 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:35:35.185567 1608727 logs.go:123] Gathering logs for container status ...
	I1209 04:35:35.185587 1608727 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:35:35.215558 1608727 logs.go:123] Gathering logs for kubelet ...
	I1209 04:35:35.215576 1608727 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:35:35.283392 1608727 logs.go:123] Gathering logs for dmesg ...
	I1209 04:35:35.283411 1608727 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:35:35.298468 1608727 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:35:35.298484 1608727 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:35:35.366449 1608727 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:35.357747    4892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:35.358537    4892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:35.360061    4892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:35.360628    4892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:35.362242    4892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:35:35.357747    4892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:35.358537    4892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:35.360061    4892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:35.360628    4892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:35.362242    4892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	W1209 04:35:35.366462 1608727 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000051413s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
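The second SystemVerification warning above names the knob for cgroup v1 hosts: set the kubelet configuration option 'FailCgroupV1' to 'false'. A minimal sketch of that change, assuming the YAML key is the lowerCamelCase form `failCgroupV1` and using the config path kubeadm writes above:

    # illustrative only: kubeadm rewrites /var/lib/kubelet/config.yaml on each init,
    # so in practice this would be injected via kubeadm patches or minikube extra-config
    printf 'failCgroupV1: false\n' | sudo tee -a /var/lib/kubelet/config.yaml
    sudo systemctl restart kubelet

Per the warning, the validation must also be explicitly skipped; see the linked KEP for details.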
	W1209 04:35:35.366499 1608727 out.go:285] * 
	W1209 04:35:35.366562 1608727 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000051413s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 04:35:35.366596 1608727 out.go:285] * 
	W1209 04:35:35.368714 1608727 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 04:35:35.373609 1608727 out.go:203] 
	W1209 04:35:35.377525 1608727 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000051413s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 04:35:35.377580 1608727 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1209 04:35:35.377600 1608727 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
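The suggested workaround, spelled out as a command (flag taken from the suggestion above, profile name from this run):

    minikube start -p functional-331811 --extra-config=kubelet.cgroup-driver=systemd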
	I1209 04:35:35.380754 1608727 out.go:203] 
	
	
	==> CRI-O <==
	Dec 09 04:27:26 functional-331811 crio[843]: time="2025-12-09T04:27:26.487688056Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 09 04:27:26 functional-331811 crio[843]: time="2025-12-09T04:27:26.487858536Z" level=info msg="Starting seccomp notifier watcher"
	Dec 09 04:27:26 functional-331811 crio[843]: time="2025-12-09T04:27:26.487969421Z" level=info msg="Create NRI interface"
	Dec 09 04:27:26 functional-331811 crio[843]: time="2025-12-09T04:27:26.488128577Z" level=info msg="built-in NRI default validator is disabled"
	Dec 09 04:27:26 functional-331811 crio[843]: time="2025-12-09T04:27:26.488158214Z" level=info msg="runtime interface created"
	Dec 09 04:27:26 functional-331811 crio[843]: time="2025-12-09T04:27:26.488173147Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 09 04:27:26 functional-331811 crio[843]: time="2025-12-09T04:27:26.488180704Z" level=info msg="runtime interface starting up..."
	Dec 09 04:27:26 functional-331811 crio[843]: time="2025-12-09T04:27:26.488186932Z" level=info msg="starting plugins..."
	Dec 09 04:27:26 functional-331811 crio[843]: time="2025-12-09T04:27:26.488201685Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 09 04:27:26 functional-331811 crio[843]: time="2025-12-09T04:27:26.488276541Z" level=info msg="No systemd watchdog enabled"
	Dec 09 04:27:26 functional-331811 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 09 04:27:28 functional-331811 crio[843]: time="2025-12-09T04:27:28.253762189Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=f6a81315-0f20-460c-a1a6-340a47f92611 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:27:28 functional-331811 crio[843]: time="2025-12-09T04:27:28.254896786Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=4f1e188a-d8f7-48b5-82ba-45f3ed8d633f name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:27:28 functional-331811 crio[843]: time="2025-12-09T04:27:28.255456807Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=4f5a8c5f-1957-4a41-b78b-754d9475f093 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:27:28 functional-331811 crio[843]: time="2025-12-09T04:27:28.256033255Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=df1af3fe-777e-45de-af3e-d8ddbb883cca name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:27:28 functional-331811 crio[843]: time="2025-12-09T04:27:28.256505382Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=9ce5923e-4470-4b3e-8079-221c5e6306f7 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:27:28 functional-331811 crio[843]: time="2025-12-09T04:27:28.257038087Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=90a415bf-9ee9-4e06-8278-8b4dfcbf08b7 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:27:28 functional-331811 crio[843]: time="2025-12-09T04:27:28.257510674Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=02d1666b-5743-4266-9aa4-575ee1c303f9 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:31:33 functional-331811 crio[843]: time="2025-12-09T04:31:33.726105743Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=d602a0ff-9a2a-4901-9ee7-3a9bdbd7bd84 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:31:33 functional-331811 crio[843]: time="2025-12-09T04:31:33.726983117Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=4aa76f1d-1524-4b9d-b872-039fc3460394 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:31:33 functional-331811 crio[843]: time="2025-12-09T04:31:33.7275173Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=d8a2917b-27e2-42c7-8f55-fcecaf26eb8d name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:31:33 functional-331811 crio[843]: time="2025-12-09T04:31:33.7279997Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=ae818a96-d296-40df-b5df-fa869083c5de name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:31:33 functional-331811 crio[843]: time="2025-12-09T04:31:33.728425311Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=65ae4320-01b6-4883-a27f-61afb6e7e2cb name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:31:33 functional-331811 crio[843]: time="2025-12-09T04:31:33.728882964Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=2afcdcf0-f878-4bdf-878d-16c3135728fb name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:31:33 functional-331811 crio[843]: time="2025-12-09T04:31:33.72930879Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=0d77aa2f-1675-47e7-b7f6-ce60b8308427 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:35:36.386184    4996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:36.386715    4996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:36.388331    4996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:36.388879    4996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:35:36.390499    4996 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 9 02:15] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 03:35] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 04:15] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 04:17] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:23] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:24] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 04:35:36 up  9:17,  0 user,  load average: 0.35, 0.51, 1.02
	Linux functional-331811 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 09 04:35:33 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:35:34 functional-331811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 646.
	Dec 09 04:35:34 functional-331811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:35:34 functional-331811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:35:34 functional-331811 kubelet[4805]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:35:34 functional-331811 kubelet[4805]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:35:34 functional-331811 kubelet[4805]: E1209 04:35:34.375665    4805 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:35:34 functional-331811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:35:34 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:35:35 functional-331811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 647.
	Dec 09 04:35:35 functional-331811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:35:35 functional-331811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:35:35 functional-331811 kubelet[4850]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:35:35 functional-331811 kubelet[4850]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:35:35 functional-331811 kubelet[4850]: E1209 04:35:35.143678    4850 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:35:35 functional-331811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:35:35 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:35:35 functional-331811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 648.
	Dec 09 04:35:35 functional-331811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:35:35 functional-331811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:35:35 functional-331811 kubelet[4911]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:35:35 functional-331811 kubelet[4911]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:35:35 functional-331811 kubelet[4911]: E1209 04:35:35.896914    4911 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:35:35 functional-331811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:35:35 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
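The kubelet crash-loop captured above is a cgroup v1/v2 mismatch: kubelet v1.35.0-beta.0 fails its own config validation with "kubelet is configured to not run on a host using cgroup v1", and systemd keeps rescheduling it (restart counter 646-648). A minimal way to check which cgroup hierarchy a host exposes (a generic sketch, not output from this report):

	$ stat -fc %T /sys/fs/cgroup/
	cgroup2fs   # unified cgroup v2; "tmpfs" here means the host is still on cgroup v1

The Ubuntu 20.04 Jenkins host in this run (kernel 5.15.0-1084-aws, CgroupDriver:cgroupfs per the docker info later in this log) boots with cgroup v1 by default, which is consistent with the validation failure.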
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-331811 -n functional-331811
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-331811 -n functional-331811: exit status 6 (359.425052ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1209 04:35:36.863736 1614526 status.go:458] kubeconfig endpoint: get endpoint: "functional-331811" does not appear in /home/jenkins/minikube-integration/22081-1577059/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "functional-331811" apiserver is not running, skipping kubectl commands (state="Stopped")
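The status failure here is a kubeconfig problem rather than a cluster one: the "functional-331811" context is missing from the run's kubeconfig, so the apiserver is reported as Stopped and status exits 6. The fix the output itself suggests, spelled out for this run's binary and profile (a sketch):

	$ out/minikube-linux-arm64 -p functional-331811 update-context

minikube update-context rewrites the kubeconfig entry for the profile to point at the cluster's current endpoint.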
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (502.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (369.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1209 04:35:36.879577 1580521 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-331811 --alsologtostderr -v=8
E1209 04:36:21.780680 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-790468/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:36:49.489868 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-790468/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:39:31.979884 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:40:55.057924 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:41:21.781025 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-790468/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-331811 --alsologtostderr -v=8: exit status 80 (6m6.221794598s)

-- stdout --
	* [functional-331811] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-1577059/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1577059/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-331811" primary control-plane node in "functional-331811" cluster
	* Pulling base image v0.0.48-1765184860-22066 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1209 04:35:36.923741 1614600 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:35:36.923916 1614600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:35:36.923926 1614600 out.go:374] Setting ErrFile to fd 2...
	I1209 04:35:36.923933 1614600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:35:36.924200 1614600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 04:35:36.924580 1614600 out.go:368] Setting JSON to false
	I1209 04:35:36.925424 1614600 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":33477,"bootTime":1765221460,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1209 04:35:36.925503 1614600 start.go:143] virtualization:  
	I1209 04:35:36.929063 1614600 out.go:179] * [functional-331811] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 04:35:36.932800 1614600 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 04:35:36.932938 1614600 notify.go:221] Checking for updates...
	I1209 04:35:36.938644 1614600 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:35:36.941493 1614600 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:35:36.944366 1614600 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1577059/.minikube
	I1209 04:35:36.947167 1614600 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 04:35:36.949981 1614600 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 04:35:36.953271 1614600 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 04:35:36.953380 1614600 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:35:36.980248 1614600 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:35:36.980355 1614600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:35:37.042703 1614600 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 04:35:37.032815271 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:35:37.042820 1614600 docker.go:319] overlay module found
	I1209 04:35:37.045833 1614600 out.go:179] * Using the docker driver based on existing profile
	I1209 04:35:37.048621 1614600 start.go:309] selected driver: docker
	I1209 04:35:37.048647 1614600 start.go:927] validating driver "docker" against &{Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:35:37.048735 1614600 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 04:35:37.048847 1614600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:35:37.101945 1614600 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 04:35:37.092778249 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:35:37.102371 1614600 cni.go:84] Creating CNI manager for ""
	I1209 04:35:37.102446 1614600 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 04:35:37.102494 1614600 start.go:353] cluster config:
	{Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:35:37.105799 1614600 out.go:179] * Starting "functional-331811" primary control-plane node in "functional-331811" cluster
	I1209 04:35:37.108781 1614600 cache.go:134] Beginning downloading kic base image for docker with crio
	I1209 04:35:37.111778 1614600 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 04:35:37.114815 1614600 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1209 04:35:37.114886 1614600 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1209 04:35:37.114901 1614600 cache.go:65] Caching tarball of preloaded images
	I1209 04:35:37.114901 1614600 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 04:35:37.114988 1614600 preload.go:238] Found /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1209 04:35:37.114998 1614600 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1209 04:35:37.115114 1614600 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/config.json ...
	I1209 04:35:37.133782 1614600 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 04:35:37.133805 1614600 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 04:35:37.133825 1614600 cache.go:243] Successfully downloaded all kic artifacts
	I1209 04:35:37.133858 1614600 start.go:360] acquireMachinesLock for functional-331811: {Name:mkd467b4f3dd08f05040481144eb7b6b1e27d3ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 04:35:37.133920 1614600 start.go:364] duration metric: took 38.638µs to acquireMachinesLock for "functional-331811"
	I1209 04:35:37.133944 1614600 start.go:96] Skipping create...Using existing machine configuration
	I1209 04:35:37.133953 1614600 fix.go:54] fixHost starting: 
	I1209 04:35:37.134223 1614600 cli_runner.go:164] Run: docker container inspect functional-331811 --format={{.State.Status}}
	I1209 04:35:37.151389 1614600 fix.go:112] recreateIfNeeded on functional-331811: state=Running err=<nil>
	W1209 04:35:37.151428 1614600 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 04:35:37.154776 1614600 out.go:252] * Updating the running docker "functional-331811" container ...
	I1209 04:35:37.154815 1614600 machine.go:94] provisionDockerMachine start ...
	I1209 04:35:37.154907 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:37.171646 1614600 main.go:143] libmachine: Using SSH client type: native
	I1209 04:35:37.171972 1614600 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34255 <nil> <nil>}
	I1209 04:35:37.171985 1614600 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 04:35:37.327745 1614600 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-331811
	
	I1209 04:35:37.327810 1614600 ubuntu.go:182] provisioning hostname "functional-331811"
	I1209 04:35:37.327896 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:37.347228 1614600 main.go:143] libmachine: Using SSH client type: native
	I1209 04:35:37.347562 1614600 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34255 <nil> <nil>}
	I1209 04:35:37.347574 1614600 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-331811 && echo "functional-331811" | sudo tee /etc/hostname
	I1209 04:35:37.512164 1614600 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-331811
	
	I1209 04:35:37.512262 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:37.529769 1614600 main.go:143] libmachine: Using SSH client type: native
	I1209 04:35:37.530100 1614600 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34255 <nil> <nil>}
	I1209 04:35:37.530124 1614600 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-331811' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-331811/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-331811' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 04:35:37.682808 1614600 main.go:143] libmachine: SSH cmd err, output: <nil>: 
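	The three-step SSH exchange above is minikube's hostname provisioning: set the hostname, then pin it in /etc/hosts. In the guard script, grep -xq matches whole lines only, so an existing 127.0.1.1 entry is rewritten in place with sed, and a fresh entry is appended (and echoed back by tee) only if none exists; the empty SSH output here means the append branch was not taken. A one-line illustration of the -x semantics (not from this report):

	$ printf 'foo\n' | grep -xq 'fo'; echo $?   # prints 1: -x requires the whole line to match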
	I1209 04:35:37.682838 1614600 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1577059/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1577059/.minikube}
	I1209 04:35:37.682870 1614600 ubuntu.go:190] setting up certificates
	I1209 04:35:37.682895 1614600 provision.go:84] configureAuth start
	I1209 04:35:37.682958 1614600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-331811
	I1209 04:35:37.700930 1614600 provision.go:143] copyHostCerts
	I1209 04:35:37.700976 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem
	I1209 04:35:37.701008 1614600 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem, removing ...
	I1209 04:35:37.701021 1614600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem
	I1209 04:35:37.701094 1614600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem (1078 bytes)
	I1209 04:35:37.701192 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem
	I1209 04:35:37.701215 1614600 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem, removing ...
	I1209 04:35:37.701230 1614600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem
	I1209 04:35:37.701259 1614600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem (1123 bytes)
	I1209 04:35:37.701304 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem
	I1209 04:35:37.701324 1614600 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem, removing ...
	I1209 04:35:37.701331 1614600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem
	I1209 04:35:37.701357 1614600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem (1675 bytes)
	I1209 04:35:37.701411 1614600 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem org=jenkins.functional-331811 san=[127.0.0.1 192.168.49.2 functional-331811 localhost minikube]
	I1209 04:35:37.907915 1614600 provision.go:177] copyRemoteCerts
	I1209 04:35:37.907981 1614600 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 04:35:37.908038 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:37.925118 1614600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:35:38.031668 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 04:35:38.031745 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 04:35:38.051846 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 04:35:38.051953 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 04:35:38.075178 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 04:35:38.075249 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 04:35:38.102039 1614600 provision.go:87] duration metric: took 419.115897ms to configureAuth
	I1209 04:35:38.102117 1614600 ubuntu.go:206] setting minikube options for container-runtime
	I1209 04:35:38.102384 1614600 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 04:35:38.102539 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:38.125059 1614600 main.go:143] libmachine: Using SSH client type: native
	I1209 04:35:38.125376 1614600 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34255 <nil> <nil>}
	I1209 04:35:38.125391 1614600 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 04:35:38.471803 1614600 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 04:35:38.471824 1614600 machine.go:97] duration metric: took 1.317001735s to provisionDockerMachine
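	The CRIO_MINIKUBE_OPTIONS write just above marks the in-cluster service CIDR (10.96.0.0/12) as an insecure registry, presumably so image pulls from in-cluster ClusterIP registries work without TLS, and then restarts CRI-O to pick it up. The drop-in can be inspected directly; expected content is exactly what the command wrote:

	$ cat /etc/sysconfig/crio.minikube
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '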
	I1209 04:35:38.471836 1614600 start.go:293] postStartSetup for "functional-331811" (driver="docker")
	I1209 04:35:38.471848 1614600 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 04:35:38.471925 1614600 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 04:35:38.471961 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:38.490918 1614600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:35:38.598660 1614600 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 04:35:38.602109 1614600 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1209 04:35:38.602129 1614600 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1209 04:35:38.602133 1614600 command_runner.go:130] > VERSION_ID="12"
	I1209 04:35:38.602137 1614600 command_runner.go:130] > VERSION="12 (bookworm)"
	I1209 04:35:38.602143 1614600 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1209 04:35:38.602146 1614600 command_runner.go:130] > ID=debian
	I1209 04:35:38.602151 1614600 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1209 04:35:38.602156 1614600 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1209 04:35:38.602162 1614600 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1209 04:35:38.602263 1614600 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 04:35:38.602312 1614600 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 04:35:38.602329 1614600 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1577059/.minikube/addons for local assets ...
	I1209 04:35:38.602392 1614600 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1577059/.minikube/files for local assets ...
	I1209 04:35:38.602478 1614600 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem -> 15805212.pem in /etc/ssl/certs
	I1209 04:35:38.602488 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem -> /etc/ssl/certs/15805212.pem
	I1209 04:35:38.602561 1614600 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/test/nested/copy/1580521/hosts -> hosts in /etc/test/nested/copy/1580521
	I1209 04:35:38.602585 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/test/nested/copy/1580521/hosts -> /etc/test/nested/copy/1580521/hosts
	I1209 04:35:38.602639 1614600 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1580521
	I1209 04:35:38.610143 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem --> /etc/ssl/certs/15805212.pem (1708 bytes)
	I1209 04:35:38.627602 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/test/nested/copy/1580521/hosts --> /etc/test/nested/copy/1580521/hosts (40 bytes)
	I1209 04:35:38.644510 1614600 start.go:296] duration metric: took 172.65884ms for postStartSetup
	I1209 04:35:38.644590 1614600 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 04:35:38.644638 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:38.661666 1614600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:35:38.763521 1614600 command_runner.go:130] > 14%
	I1209 04:35:38.763600 1614600 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 04:35:38.767910 1614600 command_runner.go:130] > 169G
	I1209 04:35:38.768419 1614600 fix.go:56] duration metric: took 1.634462107s for fixHost
	I1209 04:35:38.768442 1614600 start.go:83] releasing machines lock for "functional-331811", held for 1.634508761s
	I1209 04:35:38.768510 1614600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-331811
	I1209 04:35:38.785686 1614600 ssh_runner.go:195] Run: cat /version.json
	I1209 04:35:38.785708 1614600 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 04:35:38.785735 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:38.785760 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:38.812264 1614600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:35:38.824669 1614600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:35:38.938034 1614600 command_runner.go:130] > {"iso_version": "v1.37.0-1764843329-22032", "kicbase_version": "v0.0.48-1765184860-22066", "minikube_version": "v1.37.0", "commit": "27bcd52be11288bda2f9abde063aa47b22607695"}
	I1209 04:35:38.938167 1614600 ssh_runner.go:195] Run: systemctl --version
	I1209 04:35:39.026186 1614600 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1209 04:35:39.029038 1614600 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1209 04:35:39.029075 1614600 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1209 04:35:39.029143 1614600 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 04:35:39.066886 1614600 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1209 04:35:39.071437 1614600 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1209 04:35:39.071476 1614600 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 04:35:39.071539 1614600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 04:35:39.079896 1614600 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 04:35:39.079922 1614600 start.go:496] detecting cgroup driver to use...
	I1209 04:35:39.079956 1614600 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 04:35:39.080020 1614600 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 04:35:39.095690 1614600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 04:35:39.109020 1614600 docker.go:218] disabling cri-docker service (if available) ...
	I1209 04:35:39.109092 1614600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 04:35:39.124696 1614600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 04:35:39.138081 1614600 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 04:35:39.247127 1614600 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 04:35:39.364113 1614600 docker.go:234] disabling docker service ...
	I1209 04:35:39.364202 1614600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 04:35:39.381227 1614600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 04:35:39.394458 1614600 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 04:35:39.513409 1614600 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 04:35:39.656760 1614600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 04:35:39.669700 1614600 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 04:35:39.682849 1614600 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1209 04:35:39.684261 1614600 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 04:35:39.684369 1614600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:35:39.693327 1614600 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 04:35:39.693420 1614600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:35:39.702710 1614600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:35:39.711893 1614600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:35:39.720974 1614600 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 04:35:39.729134 1614600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:35:39.738010 1614600 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:35:39.746818 1614600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:35:39.757592 1614600 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 04:35:39.764510 1614600 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1209 04:35:39.765518 1614600 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 04:35:39.773280 1614600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:35:39.885186 1614600 ssh_runner.go:195] Run: sudo systemctl restart crio
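	The sed sequence before this restart rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10.1, set cgroup_manager = "cgroupfs", re-add conmon_cgroup = "pod", and inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. A quick post-restart check (a sketch; key ordering in the real file may differ):

	$ sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	  "net.ipv4.ip_unprivileged_port_start=0",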
	I1209 04:35:40.065444 1614600 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 04:35:40.065521 1614600 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 04:35:40.069680 1614600 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1209 04:35:40.069719 1614600 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1209 04:35:40.069751 1614600 command_runner.go:130] > Device: 0,72	Inode: 1638        Links: 1
	I1209 04:35:40.069764 1614600 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1209 04:35:40.069773 1614600 command_runner.go:130] > Access: 2025-12-09 04:35:39.990981436 +0000
	I1209 04:35:40.069780 1614600 command_runner.go:130] > Modify: 2025-12-09 04:35:39.990981436 +0000
	I1209 04:35:40.069788 1614600 command_runner.go:130] > Change: 2025-12-09 04:35:39.990981436 +0000
	I1209 04:35:40.069792 1614600 command_runner.go:130] >  Birth: -
	I1209 04:35:40.069850 1614600 start.go:564] Will wait 60s for crictl version
	I1209 04:35:40.069925 1614600 ssh_runner.go:195] Run: which crictl
	I1209 04:35:40.073554 1614600 command_runner.go:130] > /usr/local/bin/crictl
	I1209 04:35:40.073791 1614600 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 04:35:40.095945 1614600 command_runner.go:130] > Version:  0.1.0
	I1209 04:35:40.096030 1614600 command_runner.go:130] > RuntimeName:  cri-o
	I1209 04:35:40.096051 1614600 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1209 04:35:40.096074 1614600 command_runner.go:130] > RuntimeApiVersion:  v1
	I1209 04:35:40.098378 1614600 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
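	Because /etc/crictl.yaml was written earlier with runtime-endpoint: unix:///var/run/crio/crio.sock, the bare crictl calls in this log need no endpoint flag; the explicit equivalent would be (a sketch):

	$ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version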
	I1209 04:35:40.098514 1614600 ssh_runner.go:195] Run: crio --version
	I1209 04:35:40.127067 1614600 command_runner.go:130] > crio version 1.34.3
	I1209 04:35:40.127092 1614600 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1209 04:35:40.127099 1614600 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1209 04:35:40.127105 1614600 command_runner.go:130] >    GitTreeState:   dirty
	I1209 04:35:40.127110 1614600 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1209 04:35:40.127137 1614600 command_runner.go:130] >    GoVersion:      go1.24.6
	I1209 04:35:40.127156 1614600 command_runner.go:130] >    Compiler:       gc
	I1209 04:35:40.127168 1614600 command_runner.go:130] >    Platform:       linux/arm64
	I1209 04:35:40.127172 1614600 command_runner.go:130] >    Linkmode:       static
	I1209 04:35:40.127180 1614600 command_runner.go:130] >    BuildTags:
	I1209 04:35:40.127185 1614600 command_runner.go:130] >      static
	I1209 04:35:40.127194 1614600 command_runner.go:130] >      netgo
	I1209 04:35:40.127198 1614600 command_runner.go:130] >      osusergo
	I1209 04:35:40.127227 1614600 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1209 04:35:40.127238 1614600 command_runner.go:130] >      seccomp
	I1209 04:35:40.127242 1614600 command_runner.go:130] >      apparmor
	I1209 04:35:40.127250 1614600 command_runner.go:130] >      selinux
	I1209 04:35:40.127255 1614600 command_runner.go:130] >    LDFlags:          unknown
	I1209 04:35:40.127262 1614600 command_runner.go:130] >    SeccompEnabled:   true
	I1209 04:35:40.127267 1614600 command_runner.go:130] >    AppArmorEnabled:  false
	I1209 04:35:40.129252 1614600 ssh_runner.go:195] Run: crio --version
	I1209 04:35:40.157358 1614600 command_runner.go:130] > crio version 1.34.3
	I1209 04:35:40.157406 1614600 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1209 04:35:40.157412 1614600 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1209 04:35:40.157417 1614600 command_runner.go:130] >    GitTreeState:   dirty
	I1209 04:35:40.157423 1614600 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1209 04:35:40.157427 1614600 command_runner.go:130] >    GoVersion:      go1.24.6
	I1209 04:35:40.157432 1614600 command_runner.go:130] >    Compiler:       gc
	I1209 04:35:40.157472 1614600 command_runner.go:130] >    Platform:       linux/arm64
	I1209 04:35:40.157484 1614600 command_runner.go:130] >    Linkmode:       static
	I1209 04:35:40.157489 1614600 command_runner.go:130] >    BuildTags:
	I1209 04:35:40.157492 1614600 command_runner.go:130] >      static
	I1209 04:35:40.157496 1614600 command_runner.go:130] >      netgo
	I1209 04:35:40.157508 1614600 command_runner.go:130] >      osusergo
	I1209 04:35:40.157512 1614600 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1209 04:35:40.157516 1614600 command_runner.go:130] >      seccomp
	I1209 04:35:40.157547 1614600 command_runner.go:130] >      apparmor
	I1209 04:35:40.157557 1614600 command_runner.go:130] >      selinux
	I1209 04:35:40.157562 1614600 command_runner.go:130] >    LDFlags:          unknown
	I1209 04:35:40.157567 1614600 command_runner.go:130] >    SeccompEnabled:   true
	I1209 04:35:40.157573 1614600 command_runner.go:130] >    AppArmorEnabled:  false
	I1209 04:35:40.164627 1614600 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1209 04:35:40.167496 1614600 cli_runner.go:164] Run: docker network inspect functional-331811 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 04:35:40.183934 1614600 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1209 04:35:40.187985 1614600 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1209 04:35:40.188113 1614600 kubeadm.go:884] updating cluster {Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 04:35:40.188232 1614600 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1209 04:35:40.188297 1614600 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:35:40.225616 1614600 command_runner.go:130] > {
	I1209 04:35:40.225636 1614600 command_runner.go:130] >   "images":  [
	I1209 04:35:40.225641 1614600 command_runner.go:130] >     {
	I1209 04:35:40.225650 1614600 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1209 04:35:40.225655 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.225670 1614600 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1209 04:35:40.225673 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225678 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.225687 1614600 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1209 04:35:40.225695 1614600 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1209 04:35:40.225699 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225704 1614600 command_runner.go:130] >       "size":  "111333938",
	I1209 04:35:40.225711 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.225716 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.225719 1614600 command_runner.go:130] >     },
	I1209 04:35:40.225723 1614600 command_runner.go:130] >     {
	I1209 04:35:40.225729 1614600 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1209 04:35:40.225733 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.225738 1614600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1209 04:35:40.225742 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225751 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.225760 1614600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1209 04:35:40.225769 1614600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1209 04:35:40.225773 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225777 1614600 command_runner.go:130] >       "size":  "29037500",
	I1209 04:35:40.225781 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.225789 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.225792 1614600 command_runner.go:130] >     },
	I1209 04:35:40.225795 1614600 command_runner.go:130] >     {
	I1209 04:35:40.225802 1614600 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1209 04:35:40.225806 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.225811 1614600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1209 04:35:40.225814 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225818 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.225826 1614600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1209 04:35:40.225835 1614600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1209 04:35:40.225838 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225842 1614600 command_runner.go:130] >       "size":  "74491780",
	I1209 04:35:40.225847 1614600 command_runner.go:130] >       "username":  "nonroot",
	I1209 04:35:40.225851 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.225854 1614600 command_runner.go:130] >     },
	I1209 04:35:40.225857 1614600 command_runner.go:130] >     {
	I1209 04:35:40.225864 1614600 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1209 04:35:40.225868 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.225872 1614600 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1209 04:35:40.225881 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225885 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.225897 1614600 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1209 04:35:40.225905 1614600 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1209 04:35:40.225909 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225913 1614600 command_runner.go:130] >       "size":  "60857170",
	I1209 04:35:40.225916 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.225920 1614600 command_runner.go:130] >         "value":  "0"
	I1209 04:35:40.225923 1614600 command_runner.go:130] >       },
	I1209 04:35:40.225931 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.225936 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.225939 1614600 command_runner.go:130] >     },
	I1209 04:35:40.225942 1614600 command_runner.go:130] >     {
	I1209 04:35:40.225949 1614600 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1209 04:35:40.225953 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.225958 1614600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1209 04:35:40.225961 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225965 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.225973 1614600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1209 04:35:40.225981 1614600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1209 04:35:40.225983 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225987 1614600 command_runner.go:130] >       "size":  "84949999",
	I1209 04:35:40.225991 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.225995 1614600 command_runner.go:130] >         "value":  "0"
	I1209 04:35:40.225998 1614600 command_runner.go:130] >       },
	I1209 04:35:40.226001 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.226005 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.226008 1614600 command_runner.go:130] >     },
	I1209 04:35:40.226011 1614600 command_runner.go:130] >     {
	I1209 04:35:40.226018 1614600 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1209 04:35:40.226021 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.226027 1614600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1209 04:35:40.226030 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.226037 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.226045 1614600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1209 04:35:40.226054 1614600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1209 04:35:40.226057 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.226062 1614600 command_runner.go:130] >       "size":  "72170325",
	I1209 04:35:40.226065 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.226069 1614600 command_runner.go:130] >         "value":  "0"
	I1209 04:35:40.226072 1614600 command_runner.go:130] >       },
	I1209 04:35:40.226076 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.226080 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.226082 1614600 command_runner.go:130] >     },
	I1209 04:35:40.226085 1614600 command_runner.go:130] >     {
	I1209 04:35:40.226092 1614600 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1209 04:35:40.226096 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.226101 1614600 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1209 04:35:40.226104 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.226108 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.226115 1614600 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1209 04:35:40.226123 1614600 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1209 04:35:40.226126 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.226130 1614600 command_runner.go:130] >       "size":  "74106775",
	I1209 04:35:40.226133 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.226137 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.226140 1614600 command_runner.go:130] >     },
	I1209 04:35:40.226143 1614600 command_runner.go:130] >     {
	I1209 04:35:40.226149 1614600 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1209 04:35:40.226153 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.226159 1614600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1209 04:35:40.226162 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.226166 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.226174 1614600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1209 04:35:40.226196 1614600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1209 04:35:40.226200 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.226207 1614600 command_runner.go:130] >       "size":  "49822549",
	I1209 04:35:40.226210 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.226214 1614600 command_runner.go:130] >         "value":  "0"
	I1209 04:35:40.226218 1614600 command_runner.go:130] >       },
	I1209 04:35:40.226222 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.226226 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.226228 1614600 command_runner.go:130] >     },
	I1209 04:35:40.226232 1614600 command_runner.go:130] >     {
	I1209 04:35:40.226238 1614600 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1209 04:35:40.226242 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.226246 1614600 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1209 04:35:40.226249 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.226253 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.226261 1614600 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1209 04:35:40.226269 1614600 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1209 04:35:40.226273 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.226277 1614600 command_runner.go:130] >       "size":  "519884",
	I1209 04:35:40.226280 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.226284 1614600 command_runner.go:130] >         "value":  "65535"
	I1209 04:35:40.226288 1614600 command_runner.go:130] >       },
	I1209 04:35:40.226294 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.226297 1614600 command_runner.go:130] >       "pinned":  true
	I1209 04:35:40.226301 1614600 command_runner.go:130] >     }
	I1209 04:35:40.226303 1614600 command_runner.go:130] >   ]
	I1209 04:35:40.226307 1614600 command_runner.go:130] > }
	I1209 04:35:40.228010 1614600 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 04:35:40.228035 1614600 crio.go:433] Images already preloaded, skipping extraction
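	The check above shells out to "sudo crictl images --output json" and matches the result against the images the preload tarball should contain. A minimal Go sketch of that kind of probe, assuming crictl is on the PATH; the expected-tag list is copied from the listing above, and the code is illustrative, not minikube's actual crio.go:

	// preloadcheck is a hedged sketch of an "are all images preloaded?" probe.
	// It runs crictl the same way the log above does and looks for required
	// repo tags. Illustrative only; the expected list comes from the log.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"strings"
	)

	// imageList mirrors the JSON shape crictl prints above.
	type imageList struct {
		Images []struct {
			ID       string   `json:"id"`
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		// Expected tags, taken from the listing above.
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
			"registry.k8s.io/etcd:3.6.5-0",
			"registry.k8s.io/pause:3.10.1",
		}
		var missing []string
		for _, w := range want {
			if !have[w] {
				missing = append(missing, w)
			}
		}
		if len(missing) == 0 {
			fmt.Println("all images are preloaded")
		} else {
			fmt.Println("missing:", strings.Join(missing, ", "))
		}
	}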
	I1209 04:35:40.228091 1614600 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:35:40.253311 1614600 command_runner.go:130] > {
	I1209 04:35:40.253331 1614600 command_runner.go:130] >   "images":  [
	I1209 04:35:40.253335 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253349 1614600 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1209 04:35:40.253353 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253360 1614600 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1209 04:35:40.253363 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253367 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253375 1614600 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1209 04:35:40.253383 1614600 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1209 04:35:40.253386 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253391 1614600 command_runner.go:130] >       "size":  "111333938",
	I1209 04:35:40.253395 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.253400 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.253403 1614600 command_runner.go:130] >     },
	I1209 04:35:40.253406 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253412 1614600 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1209 04:35:40.253416 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253421 1614600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1209 04:35:40.253425 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253429 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253437 1614600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1209 04:35:40.253445 1614600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1209 04:35:40.253449 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253453 1614600 command_runner.go:130] >       "size":  "29037500",
	I1209 04:35:40.253457 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.253463 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.253466 1614600 command_runner.go:130] >     },
	I1209 04:35:40.253469 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253476 1614600 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1209 04:35:40.253480 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253485 1614600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1209 04:35:40.253489 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253492 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253500 1614600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1209 04:35:40.253508 1614600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1209 04:35:40.253515 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253519 1614600 command_runner.go:130] >       "size":  "74491780",
	I1209 04:35:40.253523 1614600 command_runner.go:130] >       "username":  "nonroot",
	I1209 04:35:40.253528 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.253531 1614600 command_runner.go:130] >     },
	I1209 04:35:40.253534 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253540 1614600 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1209 04:35:40.253544 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253549 1614600 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1209 04:35:40.253553 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253557 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253564 1614600 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1209 04:35:40.253571 1614600 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1209 04:35:40.253574 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253578 1614600 command_runner.go:130] >       "size":  "60857170",
	I1209 04:35:40.253581 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.253585 1614600 command_runner.go:130] >         "value":  "0"
	I1209 04:35:40.253592 1614600 command_runner.go:130] >       },
	I1209 04:35:40.253600 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.253604 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.253607 1614600 command_runner.go:130] >     },
	I1209 04:35:40.253611 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253617 1614600 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1209 04:35:40.253621 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253626 1614600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1209 04:35:40.253629 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253633 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253641 1614600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1209 04:35:40.253649 1614600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1209 04:35:40.253651 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253655 1614600 command_runner.go:130] >       "size":  "84949999",
	I1209 04:35:40.253659 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.253662 1614600 command_runner.go:130] >         "value":  "0"
	I1209 04:35:40.253669 1614600 command_runner.go:130] >       },
	I1209 04:35:40.253672 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.253676 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.253679 1614600 command_runner.go:130] >     },
	I1209 04:35:40.253682 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253688 1614600 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1209 04:35:40.253691 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253698 1614600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1209 04:35:40.253701 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253704 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253713 1614600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1209 04:35:40.253721 1614600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1209 04:35:40.253724 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253728 1614600 command_runner.go:130] >       "size":  "72170325",
	I1209 04:35:40.253731 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.253735 1614600 command_runner.go:130] >         "value":  "0"
	I1209 04:35:40.253738 1614600 command_runner.go:130] >       },
	I1209 04:35:40.253742 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.253745 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.253748 1614600 command_runner.go:130] >     },
	I1209 04:35:40.253751 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253758 1614600 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1209 04:35:40.253762 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253767 1614600 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1209 04:35:40.253770 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253773 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253781 1614600 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1209 04:35:40.253789 1614600 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1209 04:35:40.253792 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253795 1614600 command_runner.go:130] >       "size":  "74106775",
	I1209 04:35:40.253799 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.253803 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.253806 1614600 command_runner.go:130] >     },
	I1209 04:35:40.253812 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253819 1614600 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1209 04:35:40.253823 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253828 1614600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1209 04:35:40.253831 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253835 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253843 1614600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1209 04:35:40.253860 1614600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1209 04:35:40.253863 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253867 1614600 command_runner.go:130] >       "size":  "49822549",
	I1209 04:35:40.253870 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.253874 1614600 command_runner.go:130] >         "value":  "0"
	I1209 04:35:40.253877 1614600 command_runner.go:130] >       },
	I1209 04:35:40.253881 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.253884 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.253887 1614600 command_runner.go:130] >     },
	I1209 04:35:40.253890 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253896 1614600 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1209 04:35:40.253900 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253905 1614600 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1209 04:35:40.253908 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253912 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253919 1614600 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1209 04:35:40.253926 1614600 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1209 04:35:40.253929 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253934 1614600 command_runner.go:130] >       "size":  "519884",
	I1209 04:35:40.253937 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.253941 1614600 command_runner.go:130] >         "value":  "65535"
	I1209 04:35:40.253944 1614600 command_runner.go:130] >       },
	I1209 04:35:40.253948 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.253952 1614600 command_runner.go:130] >       "pinned":  true
	I1209 04:35:40.253955 1614600 command_runner.go:130] >     }
	I1209 04:35:40.253958 1614600 command_runner.go:130] >   ]
	I1209 04:35:40.253965 1614600 command_runner.go:130] > }
	I1209 04:35:40.254095 1614600 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 04:35:40.254103 1614600 cache_images.go:86] Images are preloaded, skipping loading
	I1209 04:35:40.254110 1614600 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1209 04:35:40.254208 1614600 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-331811 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
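	The kubelet drop-in above is generated from the cluster config printed after it. A hedged Go sketch of how such a unit could be templated; the struct and template names are illustrative (not minikube's generator), while the version, hostname, and node IP values are taken from the log:

	package main

	import (
		"os"
		"text/template"
	)

	// kubeletOpts carries the values visible in the log above.
	type kubeletOpts struct {
		KubernetesVersion string
		NodeName          string
		NodeIP            string
	}

	const unit = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(unit))
		// Values taken from the log above.
		_ = t.Execute(os.Stdout, kubeletOpts{
			KubernetesVersion: "v1.35.0-beta.0",
			NodeName:          "functional-331811",
			NodeIP:            "192.168.49.2",
		})
	}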
	I1209 04:35:40.254292 1614600 ssh_runner.go:195] Run: crio config
	I1209 04:35:40.303771 1614600 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1209 04:35:40.303802 1614600 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1209 04:35:40.303810 1614600 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1209 04:35:40.303813 1614600 command_runner.go:130] > #
	I1209 04:35:40.303821 1614600 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1209 04:35:40.303827 1614600 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1209 04:35:40.303834 1614600 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1209 04:35:40.303844 1614600 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1209 04:35:40.303848 1614600 command_runner.go:130] > # reload'.
	I1209 04:35:40.303854 1614600 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1209 04:35:40.303865 1614600 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1209 04:35:40.303872 1614600 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1209 04:35:40.303882 1614600 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1209 04:35:40.303886 1614600 command_runner.go:130] > [crio]
	I1209 04:35:40.303892 1614600 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1209 04:35:40.303900 1614600 command_runner.go:130] > # containers images, in this directory.
	I1209 04:35:40.304039 1614600 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1209 04:35:40.304055 1614600 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1209 04:35:40.304161 1614600 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1209 04:35:40.304178 1614600 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from Root.
	I1209 04:35:40.304429 1614600 command_runner.go:130] > # imagestore = ""
	I1209 04:35:40.304453 1614600 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1209 04:35:40.304461 1614600 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1209 04:35:40.304691 1614600 command_runner.go:130] > # storage_driver = "overlay"
	I1209 04:35:40.304703 1614600 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1209 04:35:40.304710 1614600 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1209 04:35:40.304804 1614600 command_runner.go:130] > # storage_option = [
	I1209 04:35:40.305009 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.305024 1614600 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1209 04:35:40.305032 1614600 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1209 04:35:40.305284 1614600 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1209 04:35:40.305301 1614600 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1209 04:35:40.305327 1614600 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1209 04:35:40.305337 1614600 command_runner.go:130] > # always happen on a node reboot
	I1209 04:35:40.305502 1614600 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1209 04:35:40.305532 1614600 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1209 04:35:40.305540 1614600 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1209 04:35:40.305547 1614600 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1209 04:35:40.305748 1614600 command_runner.go:130] > # version_file_persist = ""
	I1209 04:35:40.305764 1614600 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1209 04:35:40.305775 1614600 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1209 04:35:40.306057 1614600 command_runner.go:130] > # internal_wipe = true
	I1209 04:35:40.306082 1614600 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1209 04:35:40.306090 1614600 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1209 04:35:40.306271 1614600 command_runner.go:130] > # internal_repair = true
	I1209 04:35:40.306293 1614600 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1209 04:35:40.306300 1614600 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1209 04:35:40.306308 1614600 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1209 04:35:40.306632 1614600 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1209 04:35:40.306647 1614600 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1209 04:35:40.306651 1614600 command_runner.go:130] > [crio.api]
	I1209 04:35:40.306663 1614600 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1209 04:35:40.306916 1614600 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1209 04:35:40.306934 1614600 command_runner.go:130] > # IP address on which the stream server will listen.
	I1209 04:35:40.307148 1614600 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1209 04:35:40.307163 1614600 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1209 04:35:40.307169 1614600 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1209 04:35:40.307396 1614600 command_runner.go:130] > # stream_port = "0"
	I1209 04:35:40.307416 1614600 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1209 04:35:40.307661 1614600 command_runner.go:130] > # stream_enable_tls = false
	I1209 04:35:40.307682 1614600 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1209 04:35:40.307871 1614600 command_runner.go:130] > # stream_idle_timeout = ""
	I1209 04:35:40.307887 1614600 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1209 04:35:40.307900 1614600 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1209 04:35:40.308079 1614600 command_runner.go:130] > # stream_tls_cert = ""
	I1209 04:35:40.308090 1614600 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1209 04:35:40.308097 1614600 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1209 04:35:40.308297 1614600 command_runner.go:130] > # stream_tls_key = ""
	I1209 04:35:40.308313 1614600 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1209 04:35:40.308326 1614600 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1209 04:35:40.308345 1614600 command_runner.go:130] > # automatically pick up the changes.
	I1209 04:35:40.308572 1614600 command_runner.go:130] > # stream_tls_ca = ""
	I1209 04:35:40.308610 1614600 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1209 04:35:40.308814 1614600 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1209 04:35:40.308835 1614600 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1209 04:35:40.309085 1614600 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
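	Both gRPC limits above fall back to 80 * 1024 * 1024 bytes when unset or non-positive, which is exactly the 83886080 shown; a one-line check in Go:

	package main

	import "fmt"

	func main() {
		// The fallback CRI-O applies when the option is unset or <= 0: 80 MiB.
		const grpcMaxMsgSize = 80 * 1024 * 1024
		fmt.Println(grpcMaxMsgSize) // prints 83886080, matching the default above
	}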
	I1209 04:35:40.309103 1614600 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1209 04:35:40.309115 1614600 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1209 04:35:40.309119 1614600 command_runner.go:130] > [crio.runtime]
	I1209 04:35:40.309126 1614600 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1209 04:35:40.309132 1614600 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1209 04:35:40.309143 1614600 command_runner.go:130] > # "nofile=1024:2048"
	I1209 04:35:40.309150 1614600 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1209 04:35:40.309302 1614600 command_runner.go:130] > # default_ulimits = [
	I1209 04:35:40.309485 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.309504 1614600 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1209 04:35:40.309688 1614600 command_runner.go:130] > # no_pivot = false
	I1209 04:35:40.309706 1614600 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1209 04:35:40.309713 1614600 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1209 04:35:40.310551 1614600 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1209 04:35:40.310598 1614600 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1209 04:35:40.310608 1614600 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1209 04:35:40.310618 1614600 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1209 04:35:40.310767 1614600 command_runner.go:130] > # conmon = ""
	I1209 04:35:40.310786 1614600 command_runner.go:130] > # Cgroup setting for conmon
	I1209 04:35:40.310795 1614600 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1209 04:35:40.310806 1614600 command_runner.go:130] > conmon_cgroup = "pod"
	I1209 04:35:40.310814 1614600 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1209 04:35:40.310835 1614600 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1209 04:35:40.310842 1614600 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1209 04:35:40.310849 1614600 command_runner.go:130] > # conmon_env = [
	I1209 04:35:40.310857 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.310866 1614600 command_runner.go:130] > # Additional environment variables to set for all the
	I1209 04:35:40.310873 1614600 command_runner.go:130] > # containers. These are overridden if set in the
	I1209 04:35:40.310879 1614600 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1209 04:35:40.310886 1614600 command_runner.go:130] > # default_env = [
	I1209 04:35:40.310889 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.310895 1614600 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1209 04:35:40.310907 1614600 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1209 04:35:40.310914 1614600 command_runner.go:130] > # selinux = false
	I1209 04:35:40.310925 1614600 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1209 04:35:40.310933 1614600 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1209 04:35:40.310938 1614600 command_runner.go:130] > # This option supports live configuration reload.
	I1209 04:35:40.310944 1614600 command_runner.go:130] > # seccomp_profile = ""
	I1209 04:35:40.310954 1614600 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1209 04:35:40.310963 1614600 command_runner.go:130] > # This option supports live configuration reload.
	I1209 04:35:40.310968 1614600 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1209 04:35:40.310974 1614600 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1209 04:35:40.310984 1614600 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1209 04:35:40.310991 1614600 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1209 04:35:40.311002 1614600 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1209 04:35:40.311007 1614600 command_runner.go:130] > # This option supports live configuration reload.
	I1209 04:35:40.311011 1614600 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1209 04:35:40.311017 1614600 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1209 04:35:40.311022 1614600 command_runner.go:130] > # the cgroup blockio controller.
	I1209 04:35:40.311028 1614600 command_runner.go:130] > # blockio_config_file = ""
	I1209 04:35:40.311035 1614600 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1209 04:35:40.311042 1614600 command_runner.go:130] > # blockio parameters.
	I1209 04:35:40.311046 1614600 command_runner.go:130] > # blockio_reload = false
	I1209 04:35:40.311059 1614600 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1209 04:35:40.311064 1614600 command_runner.go:130] > # irqbalance daemon.
	I1209 04:35:40.311073 1614600 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1209 04:35:40.311083 1614600 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I1209 04:35:40.311091 1614600 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1209 04:35:40.311107 1614600 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1209 04:35:40.311272 1614600 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1209 04:35:40.311287 1614600 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1209 04:35:40.311293 1614600 command_runner.go:130] > # This option supports live configuration reload.
	I1209 04:35:40.311441 1614600 command_runner.go:130] > # rdt_config_file = ""
	I1209 04:35:40.311462 1614600 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1209 04:35:40.311467 1614600 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1209 04:35:40.311477 1614600 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1209 04:35:40.311487 1614600 command_runner.go:130] > # separate_pull_cgroup = ""
	I1209 04:35:40.311493 1614600 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1209 04:35:40.311505 1614600 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1209 04:35:40.311514 1614600 command_runner.go:130] > # will be added.
	I1209 04:35:40.311522 1614600 command_runner.go:130] > # default_capabilities = [
	I1209 04:35:40.311525 1614600 command_runner.go:130] > # 	"CHOWN",
	I1209 04:35:40.311531 1614600 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1209 04:35:40.311535 1614600 command_runner.go:130] > # 	"FSETID",
	I1209 04:35:40.311541 1614600 command_runner.go:130] > # 	"FOWNER",
	I1209 04:35:40.311545 1614600 command_runner.go:130] > # 	"SETGID",
	I1209 04:35:40.311548 1614600 command_runner.go:130] > # 	"SETUID",
	I1209 04:35:40.311573 1614600 command_runner.go:130] > # 	"SETPCAP",
	I1209 04:35:40.311581 1614600 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1209 04:35:40.311585 1614600 command_runner.go:130] > # 	"KILL",
	I1209 04:35:40.311752 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.311769 1614600 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1209 04:35:40.311777 1614600 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1209 04:35:40.311784 1614600 command_runner.go:130] > # add_inheritable_capabilities = false
	I1209 04:35:40.311790 1614600 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1209 04:35:40.311796 1614600 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1209 04:35:40.311802 1614600 command_runner.go:130] > default_sysctls = [
	I1209 04:35:40.311807 1614600 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1209 04:35:40.311811 1614600 command_runner.go:130] > ]
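	With net.ipv4.ip_unprivileged_port_start=0 as the only active default sysctl, a non-root container process can bind ports below 1024. A small Go program to observe the effect, assuming it runs in a network namespace where that sysctl is applied:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Binding :80 as a non-root user only succeeds when
		// net.ipv4.ip_unprivileged_port_start is 0 (or <= 80) in this netns.
		ln, err := net.Listen("tcp", ":80")
		if err != nil {
			fmt.Println("bind failed (sysctl not applied, or port in use):", err)
			return
		}
		defer ln.Close()
		fmt.Println("bound", ln.Addr(), "without privileges")
	}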
	I1209 04:35:40.311823 1614600 command_runner.go:130] > # List of devices on the host that a
	I1209 04:35:40.311829 1614600 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1209 04:35:40.311833 1614600 command_runner.go:130] > # allowed_devices = [
	I1209 04:35:40.311843 1614600 command_runner.go:130] > # 	"/dev/fuse",
	I1209 04:35:40.311847 1614600 command_runner.go:130] > # 	"/dev/net/tun",
	I1209 04:35:40.311851 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.311856 1614600 command_runner.go:130] > # List of additional devices, specified as
	I1209 04:35:40.311863 1614600 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1209 04:35:40.311870 1614600 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1209 04:35:40.311876 1614600 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1209 04:35:40.311883 1614600 command_runner.go:130] > # additional_devices = [
	I1209 04:35:40.311886 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.311896 1614600 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1209 04:35:40.311900 1614600 command_runner.go:130] > # cdi_spec_dirs = [
	I1209 04:35:40.311903 1614600 command_runner.go:130] > # 	"/etc/cdi",
	I1209 04:35:40.311908 1614600 command_runner.go:130] > # 	"/var/run/cdi",
	I1209 04:35:40.311916 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.311923 1614600 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1209 04:35:40.311929 1614600 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1209 04:35:40.311936 1614600 command_runner.go:130] > # Defaults to false.
	I1209 04:35:40.311942 1614600 command_runner.go:130] > # device_ownership_from_security_context = false
	I1209 04:35:40.311958 1614600 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1209 04:35:40.311969 1614600 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1209 04:35:40.311973 1614600 command_runner.go:130] > # hooks_dir = [
	I1209 04:35:40.311980 1614600 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1209 04:35:40.311986 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.311992 1614600 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1209 04:35:40.312007 1614600 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1209 04:35:40.312013 1614600 command_runner.go:130] > # its default mounts from the following two files:
	I1209 04:35:40.312021 1614600 command_runner.go:130] > #
	I1209 04:35:40.312027 1614600 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1209 04:35:40.312034 1614600 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1209 04:35:40.312039 1614600 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1209 04:35:40.312045 1614600 command_runner.go:130] > #
	I1209 04:35:40.312051 1614600 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1209 04:35:40.312057 1614600 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1209 04:35:40.312065 1614600 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1209 04:35:40.312074 1614600 command_runner.go:130] > #      only add mounts it finds in this file.
	I1209 04:35:40.312077 1614600 command_runner.go:130] > #
	I1209 04:35:40.312081 1614600 command_runner.go:130] > # default_mounts_file = ""
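	The mounts file format described above is one /SRC:/DST pair per line. A hedged sketch of a parser for that format; this is illustrative, not CRI-O's parser, and the zoneinfo example path is hypothetical:

	package main

	import (
		"fmt"
		"strings"
	)

	// parseMounts splits "/SRC:/DST" lines as described above; blank lines
	// and '#' comments are skipped.
	func parseMounts(data string) (map[string]string, error) {
		mounts := map[string]string{}
		for _, line := range strings.Split(data, "\n") {
			line = strings.TrimSpace(line)
			if line == "" || strings.HasPrefix(line, "#") {
				continue
			}
			src, dst, ok := strings.Cut(line, ":")
			if !ok {
				return nil, fmt.Errorf("malformed mount %q, want /SRC:/DST", line)
			}
			mounts[src] = dst
		}
		return mounts, nil
	}

	func main() {
		m, err := parseMounts("/usr/share/zoneinfo:/usr/share/zoneinfo\n")
		fmt.Println(m, err)
	}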
	I1209 04:35:40.312087 1614600 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1209 04:35:40.312097 1614600 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1209 04:35:40.312102 1614600 command_runner.go:130] > # pids_limit = -1
	I1209 04:35:40.312108 1614600 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1209 04:35:40.312120 1614600 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1209 04:35:40.312128 1614600 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1209 04:35:40.312137 1614600 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1209 04:35:40.312275 1614600 command_runner.go:130] > # log_size_max = -1
	I1209 04:35:40.312297 1614600 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1209 04:35:40.312305 1614600 command_runner.go:130] > # log_to_journald = false
	I1209 04:35:40.312312 1614600 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1209 04:35:40.312322 1614600 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1209 04:35:40.312328 1614600 command_runner.go:130] > # Path to directory for container attach sockets.
	I1209 04:35:40.312333 1614600 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1209 04:35:40.312338 1614600 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1209 04:35:40.312345 1614600 command_runner.go:130] > # bind_mount_prefix = ""
	I1209 04:35:40.312351 1614600 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1209 04:35:40.312355 1614600 command_runner.go:130] > # read_only = false
	I1209 04:35:40.312361 1614600 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1209 04:35:40.312373 1614600 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1209 04:35:40.312378 1614600 command_runner.go:130] > # live configuration reload.
	I1209 04:35:40.312551 1614600 command_runner.go:130] > # log_level = "info"
	I1209 04:35:40.312568 1614600 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1209 04:35:40.312574 1614600 command_runner.go:130] > # This option supports live configuration reload.
	I1209 04:35:40.312578 1614600 command_runner.go:130] > # log_filter = ""
	I1209 04:35:40.312588 1614600 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1209 04:35:40.312594 1614600 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1209 04:35:40.312600 1614600 command_runner.go:130] > # separated by comma.
	I1209 04:35:40.312614 1614600 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1209 04:35:40.312622 1614600 command_runner.go:130] > # uid_mappings = ""
	I1209 04:35:40.312629 1614600 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1209 04:35:40.312635 1614600 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1209 04:35:40.312644 1614600 command_runner.go:130] > # separated by comma.
	I1209 04:35:40.312652 1614600 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1209 04:35:40.312657 1614600 command_runner.go:130] > # gid_mappings = ""
	I1209 04:35:40.312663 1614600 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1209 04:35:40.312670 1614600 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1209 04:35:40.312676 1614600 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1209 04:35:40.312689 1614600 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1209 04:35:40.312694 1614600 command_runner.go:130] > # minimum_mappable_uid = -1
	I1209 04:35:40.312706 1614600 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1209 04:35:40.312713 1614600 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1209 04:35:40.312719 1614600 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1209 04:35:40.312730 1614600 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1209 04:35:40.312735 1614600 command_runner.go:130] > # minimum_mappable_gid = -1
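	The mappings above use the containerUID:HostUID:Size form, with multiple ranges separated by commas. A small, illustrative Go parser for one such string (not CRI-O's implementation; the sample range is hypothetical):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// idMap is one containerID:hostID:size range, as described above.
	type idMap struct{ ContainerID, HostID, Size int }

	func parseIDMappings(s string) ([]idMap, error) {
		var maps []idMap
		for _, part := range strings.Split(s, ",") {
			f := strings.Split(part, ":")
			if len(f) != 3 {
				return nil, fmt.Errorf("malformed mapping %q", part)
			}
			var nums [3]int
			for i, v := range f {
				n, err := strconv.Atoi(v)
				if err != nil {
					return nil, err
				}
				nums[i] = n
			}
			maps = append(maps, idMap{nums[0], nums[1], nums[2]})
		}
		return maps, nil
	}

	func main() {
		fmt.Println(parseIDMappings("0:100000:65536"))
	}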
	I1209 04:35:40.312745 1614600 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1209 04:35:40.312753 1614600 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1209 04:35:40.312759 1614600 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1209 04:35:40.312763 1614600 command_runner.go:130] > # ctr_stop_timeout = 30
	I1209 04:35:40.312771 1614600 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1209 04:35:40.312781 1614600 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1209 04:35:40.312787 1614600 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1209 04:35:40.312792 1614600 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1209 04:35:40.312800 1614600 command_runner.go:130] > # drop_infra_ctr = true
	I1209 04:35:40.312807 1614600 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1209 04:35:40.312813 1614600 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1209 04:35:40.312825 1614600 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1209 04:35:40.312831 1614600 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1209 04:35:40.312838 1614600 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1209 04:35:40.312846 1614600 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1209 04:35:40.312852 1614600 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1209 04:35:40.312863 1614600 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1209 04:35:40.312871 1614600 command_runner.go:130] > # shared_cpuset = ""
	I1209 04:35:40.312877 1614600 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1209 04:35:40.312882 1614600 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1209 04:35:40.312891 1614600 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1209 04:35:40.312899 1614600 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1209 04:35:40.312903 1614600 command_runner.go:130] > # pinns_path = ""
	I1209 04:35:40.312908 1614600 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1209 04:35:40.312919 1614600 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1209 04:35:40.312924 1614600 command_runner.go:130] > # enable_criu_support = true
	I1209 04:35:40.312929 1614600 command_runner.go:130] > # Enable/disable the generation of the container,
	I1209 04:35:40.312936 1614600 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1209 04:35:40.312940 1614600 command_runner.go:130] > # enable_pod_events = false
	I1209 04:35:40.312948 1614600 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1209 04:35:40.312957 1614600 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1209 04:35:40.312962 1614600 command_runner.go:130] > # default_runtime = "crun"
	I1209 04:35:40.312967 1614600 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1209 04:35:40.312984 1614600 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating the path as a directory).
	I1209 04:35:40.312997 1614600 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1209 04:35:40.313003 1614600 command_runner.go:130] > # creation as a file is not desired either.
	I1209 04:35:40.313011 1614600 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1209 04:35:40.313018 1614600 command_runner.go:130] > # the hostname is being managed dynamically.
	I1209 04:35:40.313023 1614600 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1209 04:35:40.313241 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.313258 1614600 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1209 04:35:40.313265 1614600 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1209 04:35:40.313271 1614600 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1209 04:35:40.313279 1614600 command_runner.go:130] > # Each entry in the table should follow the format:
	I1209 04:35:40.313282 1614600 command_runner.go:130] > #
	I1209 04:35:40.313287 1614600 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1209 04:35:40.313298 1614600 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1209 04:35:40.313303 1614600 command_runner.go:130] > # runtime_type = "oci"
	I1209 04:35:40.313307 1614600 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1209 04:35:40.313320 1614600 command_runner.go:130] > # inherit_default_runtime = false
	I1209 04:35:40.313326 1614600 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1209 04:35:40.313335 1614600 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1209 04:35:40.313340 1614600 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1209 04:35:40.313344 1614600 command_runner.go:130] > # monitor_env = []
	I1209 04:35:40.313349 1614600 command_runner.go:130] > # privileged_without_host_devices = false
	I1209 04:35:40.313353 1614600 command_runner.go:130] > # allowed_annotations = []
	I1209 04:35:40.313359 1614600 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1209 04:35:40.313365 1614600 command_runner.go:130] > # no_sync_log = false
	I1209 04:35:40.313369 1614600 command_runner.go:130] > # default_annotations = {}
	I1209 04:35:40.313373 1614600 command_runner.go:130] > # stream_websockets = false
	I1209 04:35:40.313377 1614600 command_runner.go:130] > # seccomp_profile = ""
	I1209 04:35:40.313410 1614600 command_runner.go:130] > # Where:
	I1209 04:35:40.313420 1614600 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1209 04:35:40.313427 1614600 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1209 04:35:40.313440 1614600 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1209 04:35:40.313446 1614600 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1209 04:35:40.313450 1614600 command_runner.go:130] > #   in $PATH.
	I1209 04:35:40.313457 1614600 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1209 04:35:40.313465 1614600 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1209 04:35:40.313471 1614600 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1209 04:35:40.313477 1614600 command_runner.go:130] > #   state.
	I1209 04:35:40.313484 1614600 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1209 04:35:40.313498 1614600 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1209 04:35:40.313505 1614600 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1209 04:35:40.313515 1614600 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1209 04:35:40.313521 1614600 command_runner.go:130] > #   the values from the default runtime on load time.
	I1209 04:35:40.313528 1614600 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1209 04:35:40.313537 1614600 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1209 04:35:40.313543 1614600 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1209 04:35:40.313550 1614600 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1209 04:35:40.313558 1614600 command_runner.go:130] > #   The currently recognized values are:
	I1209 04:35:40.313565 1614600 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1209 04:35:40.313575 1614600 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1209 04:35:40.313584 1614600 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1209 04:35:40.313591 1614600 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1209 04:35:40.313599 1614600 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1209 04:35:40.313611 1614600 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1209 04:35:40.313618 1614600 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1209 04:35:40.313632 1614600 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1209 04:35:40.313638 1614600 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1209 04:35:40.313644 1614600 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1209 04:35:40.313651 1614600 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1209 04:35:40.313662 1614600 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1209 04:35:40.313668 1614600 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1209 04:35:40.313674 1614600 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1209 04:35:40.313684 1614600 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1209 04:35:40.313693 1614600 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1209 04:35:40.313703 1614600 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1209 04:35:40.313707 1614600 command_runner.go:130] > #   deprecated option "conmon".
	I1209 04:35:40.313715 1614600 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1209 04:35:40.313721 1614600 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1209 04:35:40.313730 1614600 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1209 04:35:40.313735 1614600 command_runner.go:130] > #   should be moved to the container's cgroup
	I1209 04:35:40.313742 1614600 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1209 04:35:40.313752 1614600 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1209 04:35:40.313763 1614600 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1209 04:35:40.313771 1614600 command_runner.go:130] > #   conmon-rs by using:
	I1209 04:35:40.313779 1614600 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1209 04:35:40.313788 1614600 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1209 04:35:40.313799 1614600 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1209 04:35:40.313806 1614600 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1209 04:35:40.313811 1614600 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1209 04:35:40.313818 1614600 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1209 04:35:40.313825 1614600 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1209 04:35:40.313830 1614600 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1209 04:35:40.313842 1614600 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1209 04:35:40.313852 1614600 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1209 04:35:40.313860 1614600 command_runner.go:130] > #   when a machine crash happens.
	I1209 04:35:40.313868 1614600 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1209 04:35:40.313881 1614600 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1209 04:35:40.313889 1614600 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1209 04:35:40.313894 1614600 command_runner.go:130] > #   seccomp profile for the runtime.
	I1209 04:35:40.313900 1614600 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1209 04:35:40.313911 1614600 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1209 04:35:40.313915 1614600 command_runner.go:130] > #
	I1209 04:35:40.313919 1614600 command_runner.go:130] > # Using the seccomp notifier feature:
	I1209 04:35:40.313927 1614600 command_runner.go:130] > #
	I1209 04:35:40.313934 1614600 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1209 04:35:40.313942 1614600 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1209 04:35:40.313949 1614600 command_runner.go:130] > #
	I1209 04:35:40.313955 1614600 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1209 04:35:40.313962 1614600 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1209 04:35:40.313965 1614600 command_runner.go:130] > #
	I1209 04:35:40.313971 1614600 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1209 04:35:40.313974 1614600 command_runner.go:130] > # feature.
	I1209 04:35:40.313977 1614600 command_runner.go:130] > #
	I1209 04:35:40.313983 1614600 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1209 04:35:40.313992 1614600 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1209 04:35:40.314004 1614600 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1209 04:35:40.314014 1614600 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1209 04:35:40.314021 1614600 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction" is "stop".
	I1209 04:35:40.314029 1614600 command_runner.go:130] > #
	I1209 04:35:40.314036 1614600 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1209 04:35:40.314042 1614600 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1209 04:35:40.314045 1614600 command_runner.go:130] > #
	I1209 04:35:40.314051 1614600 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1209 04:35:40.314057 1614600 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1209 04:35:40.314063 1614600 command_runner.go:130] > #
	I1209 04:35:40.314070 1614600 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1209 04:35:40.314076 1614600 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1209 04:35:40.314083 1614600 command_runner.go:130] > # limitation.
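	For reference, a minimal drop-in sketch that satisfies the seccomp notifier requirement described above (the handler name, runtime_root, and drop-in filename are hypothetical; the binary paths mirror the defaults dumped below), with monitor_env also set to illustrate the conmon-rs logging knob:

	# /etc/crio/crio.conf.d/99-notifier.conf (hypothetical)
	[crio.runtime.runtimes.runc-notifier]
	runtime_path = "/usr/libexec/crio/runc"
	runtime_root = "/run/runc-notifier"
	monitor_path = "/usr/libexec/crio/conmon"
	# Permit the notifier annotation for pods that opt in:
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]
	# Per the notes above, LOG_DRIVER applies when conmon-rs is the monitor:
	monitor_env = [
		"LOG_DRIVER=systemd",
	]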
	I1209 04:35:40.314088 1614600 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1209 04:35:40.314093 1614600 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1209 04:35:40.314104 1614600 command_runner.go:130] > runtime_type = ""
	I1209 04:35:40.314108 1614600 command_runner.go:130] > runtime_root = "/run/crun"
	I1209 04:35:40.314112 1614600 command_runner.go:130] > inherit_default_runtime = false
	I1209 04:35:40.314120 1614600 command_runner.go:130] > runtime_config_path = ""
	I1209 04:35:40.314124 1614600 command_runner.go:130] > container_min_memory = ""
	I1209 04:35:40.314130 1614600 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1209 04:35:40.314134 1614600 command_runner.go:130] > monitor_cgroup = "pod"
	I1209 04:35:40.314138 1614600 command_runner.go:130] > monitor_exec_cgroup = ""
	I1209 04:35:40.314142 1614600 command_runner.go:130] > allowed_annotations = [
	I1209 04:35:40.314152 1614600 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1209 04:35:40.314155 1614600 command_runner.go:130] > ]
	I1209 04:35:40.314159 1614600 command_runner.go:130] > privileged_without_host_devices = false
	I1209 04:35:40.314164 1614600 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1209 04:35:40.314172 1614600 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1209 04:35:40.314177 1614600 command_runner.go:130] > runtime_type = ""
	I1209 04:35:40.314181 1614600 command_runner.go:130] > runtime_root = "/run/runc"
	I1209 04:35:40.314191 1614600 command_runner.go:130] > inherit_default_runtime = false
	I1209 04:35:40.314195 1614600 command_runner.go:130] > runtime_config_path = ""
	I1209 04:35:40.314203 1614600 command_runner.go:130] > container_min_memory = ""
	I1209 04:35:40.314208 1614600 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1209 04:35:40.314211 1614600 command_runner.go:130] > monitor_cgroup = "pod"
	I1209 04:35:40.314215 1614600 command_runner.go:130] > monitor_exec_cgroup = ""
	I1209 04:35:40.314219 1614600 command_runner.go:130] > privileged_without_host_devices = false
	I1209 04:35:40.314440 1614600 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1209 04:35:40.314455 1614600 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1209 04:35:40.314461 1614600 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1209 04:35:40.314470 1614600 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1209 04:35:40.314481 1614600 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1209 04:35:40.314491 1614600 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1209 04:35:40.314503 1614600 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1209 04:35:40.314509 1614600 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1209 04:35:40.314523 1614600 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1209 04:35:40.314532 1614600 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1209 04:35:40.314548 1614600 command_runner.go:130] > # to override the default value for that resource type.
	I1209 04:35:40.314556 1614600 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1209 04:35:40.314560 1614600 command_runner.go:130] > # Example:
	I1209 04:35:40.314565 1614600 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1209 04:35:40.314584 1614600 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1209 04:35:40.314596 1614600 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1209 04:35:40.314602 1614600 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1209 04:35:40.314611 1614600 command_runner.go:130] > # cpuset = "0-1"
	I1209 04:35:40.314615 1614600 command_runner.go:130] > # cpushares = "5"
	I1209 04:35:40.314619 1614600 command_runner.go:130] > # cpuquota = "1000"
	I1209 04:35:40.314623 1614600 command_runner.go:130] > # cpuperiod = "100000"
	I1209 04:35:40.314627 1614600 command_runner.go:130] > # cpulimit = "35"
	I1209 04:35:40.314630 1614600 command_runner.go:130] > # Where:
	I1209 04:35:40.314634 1614600 command_runner.go:130] > # The workload name is workload-type.
	I1209 04:35:40.314642 1614600 command_runner.go:130] > # To select this workload, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1209 04:35:40.314651 1614600 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1209 04:35:40.314657 1614600 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1209 04:35:40.314665 1614600 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1209 04:35:40.314675 1614600 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
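	To make the "cpulimit" arithmetic concrete with the example values above: 35 millicores is 0.035 CPUs, so with cpuperiod = "100000" (microseconds) the derived quota would presumably be 35 × 100000 / 1000 = 3500 microseconds per period, overriding the configured cpuquota of "1000".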
	I1209 04:35:40.314680 1614600 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1209 04:35:40.314688 1614600 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1209 04:35:40.314695 1614600 command_runner.go:130] > # Default value is set to true
	I1209 04:35:40.314700 1614600 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1209 04:35:40.314706 1614600 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1209 04:35:40.314710 1614600 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1209 04:35:40.314715 1614600 command_runner.go:130] > # Default value is set to 'false'
	I1209 04:35:40.314719 1614600 command_runner.go:130] > # disable_hostport_mapping = false
	I1209 04:35:40.314731 1614600 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1209 04:35:40.314740 1614600 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1209 04:35:40.314747 1614600 command_runner.go:130] > # timezone = ""
	I1209 04:35:40.314754 1614600 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1209 04:35:40.314757 1614600 command_runner.go:130] > #
	I1209 04:35:40.314763 1614600 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1209 04:35:40.314777 1614600 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1209 04:35:40.314781 1614600 command_runner.go:130] > [crio.image]
	I1209 04:35:40.314787 1614600 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1209 04:35:40.314791 1614600 command_runner.go:130] > # default_transport = "docker://"
	I1209 04:35:40.314797 1614600 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1209 04:35:40.314810 1614600 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1209 04:35:40.314814 1614600 command_runner.go:130] > # global_auth_file = ""
	I1209 04:35:40.314819 1614600 command_runner.go:130] > # The image used to instantiate infra containers.
	I1209 04:35:40.314829 1614600 command_runner.go:130] > # This option supports live configuration reload.
	I1209 04:35:40.314834 1614600 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1209 04:35:40.314841 1614600 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1209 04:35:40.314852 1614600 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1209 04:35:40.314858 1614600 command_runner.go:130] > # This option supports live configuration reload.
	I1209 04:35:40.314863 1614600 command_runner.go:130] > # pause_image_auth_file = ""
	I1209 04:35:40.314868 1614600 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1209 04:35:40.314875 1614600 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1209 04:35:40.314888 1614600 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1209 04:35:40.314904 1614600 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1209 04:35:40.314909 1614600 command_runner.go:130] > # pause_command = "/pause"
	I1209 04:35:40.314915 1614600 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1209 04:35:40.314924 1614600 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1209 04:35:40.314931 1614600 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1209 04:35:40.314942 1614600 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1209 04:35:40.314949 1614600 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1209 04:35:40.314955 1614600 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1209 04:35:40.314959 1614600 command_runner.go:130] > # pinned_images = [
	I1209 04:35:40.314961 1614600 command_runner.go:130] > # ]
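	As an illustration of the three pattern kinds (exact, glob, keyword), a hedged sketch with made-up image names:

	[crio.image]
	pinned_images = [
		"registry.k8s.io/pause:3.10.1",  # exact: must match the entire name
		"quay.io/myorg/agent-*",         # glob: wildcard at the end only
		"*critical*",                    # keyword: wildcards on both ends
	]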
	I1209 04:35:40.314968 1614600 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1209 04:35:40.314978 1614600 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1209 04:35:40.314984 1614600 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1209 04:35:40.314995 1614600 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1209 04:35:40.315001 1614600 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1209 04:35:40.315011 1614600 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1209 04:35:40.315023 1614600 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1209 04:35:40.315031 1614600 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1209 04:35:40.315037 1614600 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1209 04:35:40.315049 1614600 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I1209 04:35:40.315055 1614600 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1209 04:35:40.315065 1614600 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
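	As a worked example (the namespace name is hypothetical): with the values above, an image pulled for a pod in namespace "prod" would be checked against /etc/crio/policies/prod.json; if no namespace is provided on the pull, or that file does not exist, the pull falls back to signature_policy ("/etc/crio/policy.json") or the system-wide policy.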
	I1209 04:35:40.315071 1614600 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1209 04:35:40.315078 1614600 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1209 04:35:40.315086 1614600 command_runner.go:130] > # changing them here.
	I1209 04:35:40.315091 1614600 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1209 04:35:40.315095 1614600 command_runner.go:130] > # insecure_registries = [
	I1209 04:35:40.315099 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.315108 1614600 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1209 04:35:40.315114 1614600 command_runner.go:130] > # ignore; the last of these will ignore volumes entirely.
	I1209 04:35:40.315319 1614600 command_runner.go:130] > # image_volumes = "mkdir"
	I1209 04:35:40.315344 1614600 command_runner.go:130] > # Temporary directory to use for storing big files
	I1209 04:35:40.315350 1614600 command_runner.go:130] > # big_files_temporary_dir = ""
	I1209 04:35:40.315355 1614600 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1209 04:35:40.315362 1614600 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1209 04:35:40.315367 1614600 command_runner.go:130] > # auto_reload_registries = false
	I1209 04:35:40.315372 1614600 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1209 04:35:40.315381 1614600 command_runner.go:130] > # gets canceled. This value will also be used to calculate the pull progress interval as pull_progress_timeout / 10.
	I1209 04:35:40.315390 1614600 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1209 04:35:40.315399 1614600 command_runner.go:130] > # pull_progress_timeout = "0s"
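	A sketch with an illustrative (non-default) value; per the rule above, the progress interval works out to timeout / 10:

	[crio.image]
	pull_progress_timeout = "1m40s"  # cancel a stalled pull after 100s; progress output roughly every 10s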
	I1209 04:35:40.315404 1614600 command_runner.go:130] > # The mode of short name resolution.
	I1209 04:35:40.315411 1614600 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1209 04:35:40.315422 1614600 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1209 04:35:40.315430 1614600 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1209 04:35:40.315434 1614600 command_runner.go:130] > # short_name_mode = "enforcing"
	I1209 04:35:40.315440 1614600 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1209 04:35:40.315446 1614600 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1209 04:35:40.315450 1614600 command_runner.go:130] > # oci_artifact_mount_support = true
	I1209 04:35:40.315456 1614600 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1209 04:35:40.315460 1614600 command_runner.go:130] > # CNI plugins.
	I1209 04:35:40.315463 1614600 command_runner.go:130] > [crio.network]
	I1209 04:35:40.315469 1614600 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1209 04:35:40.315475 1614600 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1209 04:35:40.315482 1614600 command_runner.go:130] > # cni_default_network = ""
	I1209 04:35:40.315488 1614600 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1209 04:35:40.315493 1614600 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1209 04:35:40.315503 1614600 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1209 04:35:40.315507 1614600 command_runner.go:130] > # plugin_dirs = [
	I1209 04:35:40.315515 1614600 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1209 04:35:40.315519 1614600 command_runner.go:130] > # ]
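	A hedged sketch of pinning a specific CNI network rather than relying on first-found ordering (the network name is hypothetical; kindnet is the CNI this run recommends further below):

	[crio.network]
	cni_default_network = "kindnet"
	network_dir = "/etc/cni/net.d/"
	plugin_dirs = [
		"/opt/cni/bin/",
	]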
	I1209 04:35:40.315526 1614600 command_runner.go:130] > # List of included pod metrics.
	I1209 04:35:40.315530 1614600 command_runner.go:130] > # included_pod_metrics = [
	I1209 04:35:40.315533 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.315539 1614600 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1209 04:35:40.315542 1614600 command_runner.go:130] > [crio.metrics]
	I1209 04:35:40.315547 1614600 command_runner.go:130] > # Globally enable or disable metrics support.
	I1209 04:35:40.315552 1614600 command_runner.go:130] > # enable_metrics = false
	I1209 04:35:40.315562 1614600 command_runner.go:130] > # Specify enabled metrics collectors.
	I1209 04:35:40.315567 1614600 command_runner.go:130] > # Per default all metrics are enabled.
	I1209 04:35:40.315573 1614600 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1209 04:35:40.315587 1614600 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1209 04:35:40.315593 1614600 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1209 04:35:40.315601 1614600 command_runner.go:130] > # metrics_collectors = [
	I1209 04:35:40.315605 1614600 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1209 04:35:40.315610 1614600 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1209 04:35:40.315614 1614600 command_runner.go:130] > # 	"containers_oom_total",
	I1209 04:35:40.315617 1614600 command_runner.go:130] > # 	"processes_defunct",
	I1209 04:35:40.315621 1614600 command_runner.go:130] > # 	"operations_total",
	I1209 04:35:40.315626 1614600 command_runner.go:130] > # 	"operations_latency_seconds",
	I1209 04:35:40.315630 1614600 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1209 04:35:40.315635 1614600 command_runner.go:130] > # 	"operations_errors_total",
	I1209 04:35:40.315638 1614600 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1209 04:35:40.315642 1614600 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1209 04:35:40.315646 1614600 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1209 04:35:40.315651 1614600 command_runner.go:130] > # 	"image_pulls_success_total",
	I1209 04:35:40.315661 1614600 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1209 04:35:40.315666 1614600 command_runner.go:130] > # 	"containers_oom_count_total",
	I1209 04:35:40.315675 1614600 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1209 04:35:40.315849 1614600 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1209 04:35:40.315864 1614600 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1209 04:35:40.315868 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.315880 1614600 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1209 04:35:40.315884 1614600 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1209 04:35:40.315889 1614600 command_runner.go:130] > # The port on which the metrics server will listen.
	I1209 04:35:40.315893 1614600 command_runner.go:130] > # metrics_port = 9090
	I1209 04:35:40.315899 1614600 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1209 04:35:40.315907 1614600 command_runner.go:130] > # metrics_socket = ""
	I1209 04:35:40.315912 1614600 command_runner.go:130] > # The certificate for the secure metrics server.
	I1209 04:35:40.315921 1614600 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1209 04:35:40.315929 1614600 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1209 04:35:40.315937 1614600 command_runner.go:130] > # certificate on any modification event.
	I1209 04:35:40.315944 1614600 command_runner.go:130] > # metrics_cert = ""
	I1209 04:35:40.315953 1614600 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1209 04:35:40.315959 1614600 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1209 04:35:40.315968 1614600 command_runner.go:130] > # metrics_key = ""
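	A minimal sketch enabling the Prometheus endpoint with a reduced collector set; the host and port are the documented defaults, and the collector names are taken from the default list above:

	[crio.metrics]
	enable_metrics = true
	metrics_host = "127.0.0.1"
	metrics_port = 9090
	metrics_collectors = [
		"operations_total",
		"operations_errors_total",
		"image_pulls_failure_total",
	]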
	I1209 04:35:40.315974 1614600 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1209 04:35:40.315982 1614600 command_runner.go:130] > [crio.tracing]
	I1209 04:35:40.315987 1614600 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1209 04:35:40.315996 1614600 command_runner.go:130] > # enable_tracing = false
	I1209 04:35:40.316002 1614600 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1209 04:35:40.316009 1614600 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1209 04:35:40.316017 1614600 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1209 04:35:40.316027 1614600 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
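	The corresponding sketch for exporting traces to a local OTLP/gRPC collector, using the documented always-sample value:

	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "127.0.0.1:4317"
	tracing_sampling_rate_per_million = 1000000  # 1000000 = always sample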
	I1209 04:35:40.316032 1614600 command_runner.go:130] > # CRI-O NRI configuration.
	I1209 04:35:40.316035 1614600 command_runner.go:130] > [crio.nri]
	I1209 04:35:40.316040 1614600 command_runner.go:130] > # Globally enable or disable NRI.
	I1209 04:35:40.316043 1614600 command_runner.go:130] > # enable_nri = true
	I1209 04:35:40.316047 1614600 command_runner.go:130] > # NRI socket to listen on.
	I1209 04:35:40.316051 1614600 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1209 04:35:40.316055 1614600 command_runner.go:130] > # NRI plugin directory to use.
	I1209 04:35:40.316064 1614600 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1209 04:35:40.316069 1614600 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1209 04:35:40.316077 1614600 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1209 04:35:40.316083 1614600 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1209 04:35:40.316147 1614600 command_runner.go:130] > # nri_disable_connections = false
	I1209 04:35:40.316157 1614600 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1209 04:35:40.316162 1614600 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1209 04:35:40.316185 1614600 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1209 04:35:40.316193 1614600 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1209 04:35:40.316198 1614600 command_runner.go:130] > # NRI default validator configuration.
	I1209 04:35:40.316205 1614600 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1209 04:35:40.316215 1614600 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1209 04:35:40.316220 1614600 command_runner.go:130] > # can be restricted/rejected:
	I1209 04:35:40.316224 1614600 command_runner.go:130] > # - OCI hook injection
	I1209 04:35:40.316233 1614600 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1209 04:35:40.316238 1614600 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1209 04:35:40.316243 1614600 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1209 04:35:40.316247 1614600 command_runner.go:130] > # - adjustment of linux namespaces
	I1209 04:35:40.316254 1614600 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1209 04:35:40.316264 1614600 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1209 04:35:40.316271 1614600 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1209 04:35:40.316277 1614600 command_runner.go:130] > #
	I1209 04:35:40.316282 1614600 command_runner.go:130] > # [crio.nri.default_validator]
	I1209 04:35:40.316290 1614600 command_runner.go:130] > # nri_enable_default_validator = false
	I1209 04:35:40.316295 1614600 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1209 04:35:40.316307 1614600 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1209 04:35:40.316317 1614600 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1209 04:35:40.316322 1614600 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1209 04:35:40.316327 1614600 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1209 04:35:40.316480 1614600 command_runner.go:130] > # nri_validator_required_plugins = [
	I1209 04:35:40.316508 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.316521 1614600 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
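	A hedged sketch of enabling the built-in validator so that OCI hook injection requested by an NRI plugin is rejected while the other adjustments stay allowed (table layout inferred from the commented defaults above):

	[crio.nri]
	enable_nri = true

	[crio.nri.default_validator]
	nri_enable_default_validator = true
	nri_validator_reject_oci_hook_adjustment = true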
	I1209 04:35:40.316528 1614600 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1209 04:35:40.316540 1614600 command_runner.go:130] > [crio.stats]
	I1209 04:35:40.316546 1614600 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1209 04:35:40.316551 1614600 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1209 04:35:40.316555 1614600 command_runner.go:130] > # stats_collection_period = 0
	I1209 04:35:40.316562 1614600 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1209 04:35:40.316572 1614600 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1209 04:35:40.316577 1614600 command_runner.go:130] > # collection_period = 0
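	For example, switching both collectors from on-demand to a fixed period (interval values chosen purely for illustration):

	[crio.stats]
	stats_collection_period = 10  # seconds between pod/container stats collections
	collection_period = 10        # seconds between stats and sandbox metrics collections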
	I1209 04:35:40.318311 1614600 command_runner.go:130] ! time="2025-12-09T04:35:40.282255082Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1209 04:35:40.318330 1614600 command_runner.go:130] ! time="2025-12-09T04:35:40.2822971Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1209 04:35:40.318340 1614600 command_runner.go:130] ! time="2025-12-09T04:35:40.282328904Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1209 04:35:40.318349 1614600 command_runner.go:130] ! time="2025-12-09T04:35:40.282355243Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1209 04:35:40.318358 1614600 command_runner.go:130] ! time="2025-12-09T04:35:40.282430665Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:35:40.318367 1614600 command_runner.go:130] ! time="2025-12-09T04:35:40.282713695Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1209 04:35:40.318382 1614600 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1209 04:35:40.318459 1614600 cni.go:84] Creating CNI manager for ""
	I1209 04:35:40.318484 1614600 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 04:35:40.318506 1614600 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 04:35:40.318532 1614600 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-331811 NodeName:functional-331811 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 04:35:40.318689 1614600 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-331811"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 04:35:40.318765 1614600 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1209 04:35:40.328360 1614600 command_runner.go:130] > kubeadm
	I1209 04:35:40.328381 1614600 command_runner.go:130] > kubectl
	I1209 04:35:40.328387 1614600 command_runner.go:130] > kubelet
	I1209 04:35:40.329285 1614600 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 04:35:40.329353 1614600 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 04:35:40.336944 1614600 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1209 04:35:40.349970 1614600 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1209 04:35:40.362809 1614600 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1209 04:35:40.375503 1614600 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1209 04:35:40.379345 1614600 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1209 04:35:40.379778 1614600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:35:40.502305 1614600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 04:35:41.326409 1614600 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811 for IP: 192.168.49.2
	I1209 04:35:41.326563 1614600 certs.go:195] generating shared ca certs ...
	I1209 04:35:41.326611 1614600 certs.go:227] acquiring lock for ca certs: {Name:mkbe8bce08db7aa945866791683d426e1b560718 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:35:41.326833 1614600 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key
	I1209 04:35:41.326887 1614600 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key
	I1209 04:35:41.326895 1614600 certs.go:257] generating profile certs ...
	I1209 04:35:41.327067 1614600 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.key
	I1209 04:35:41.327129 1614600 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.key.29f4af34
	I1209 04:35:41.327233 1614600 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.key
	I1209 04:35:41.327250 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 04:35:41.327267 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 04:35:41.327279 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 04:35:41.327290 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 04:35:41.327349 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 04:35:41.327367 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 04:35:41.327413 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 04:35:41.327427 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 04:35:41.327509 1614600 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem (1338 bytes)
	W1209 04:35:41.327593 1614600 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521_empty.pem, impossibly tiny 0 bytes
	I1209 04:35:41.327604 1614600 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 04:35:41.327677 1614600 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem (1078 bytes)
	I1209 04:35:41.327750 1614600 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem (1123 bytes)
	I1209 04:35:41.327813 1614600 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem (1675 bytes)
	I1209 04:35:41.327913 1614600 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem (1708 bytes)
	I1209 04:35:41.327983 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem -> /usr/share/ca-certificates/1580521.pem
	I1209 04:35:41.328001 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem -> /usr/share/ca-certificates/15805212.pem
	I1209 04:35:41.328047 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:35:41.328720 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 04:35:41.349998 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 04:35:41.370613 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 04:35:41.391438 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 04:35:41.410483 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 04:35:41.429428 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 04:35:41.449234 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 04:35:41.468289 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 04:35:41.486148 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem --> /usr/share/ca-certificates/1580521.pem (1338 bytes)
	I1209 04:35:41.504497 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem --> /usr/share/ca-certificates/15805212.pem (1708 bytes)
	I1209 04:35:41.523111 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 04:35:41.542281 1614600 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 04:35:41.555566 1614600 ssh_runner.go:195] Run: openssl version
	I1209 04:35:41.561986 1614600 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1209 04:35:41.562090 1614600 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1580521.pem
	I1209 04:35:41.569846 1614600 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1580521.pem /etc/ssl/certs/1580521.pem
	I1209 04:35:41.577817 1614600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1580521.pem
	I1209 04:35:41.581778 1614600 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  9 04:27 /usr/share/ca-certificates/1580521.pem
	I1209 04:35:41.581849 1614600 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:27 /usr/share/ca-certificates/1580521.pem
	I1209 04:35:41.581927 1614600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1580521.pem
	I1209 04:35:41.622889 1614600 command_runner.go:130] > 51391683
	I1209 04:35:41.623441 1614600 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 04:35:41.630995 1614600 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15805212.pem
	I1209 04:35:41.638454 1614600 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15805212.pem /etc/ssl/certs/15805212.pem
	I1209 04:35:41.646110 1614600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15805212.pem
	I1209 04:35:41.649703 1614600 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  9 04:27 /usr/share/ca-certificates/15805212.pem
	I1209 04:35:41.649815 1614600 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:27 /usr/share/ca-certificates/15805212.pem
	I1209 04:35:41.649886 1614600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15805212.pem
	I1209 04:35:41.690940 1614600 command_runner.go:130] > 3ec20f2e
	I1209 04:35:41.691023 1614600 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 04:35:41.698710 1614600 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:35:41.705943 1614600 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 04:35:41.713451 1614600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:35:41.717157 1614600 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  9 04:17 /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:35:41.717250 1614600 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:17 /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:35:41.717310 1614600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:35:41.757537 1614600 command_runner.go:130] > b5213941
	I1209 04:35:41.757976 1614600 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 04:35:41.765482 1614600 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 04:35:41.769213 1614600 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 04:35:41.769237 1614600 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1209 04:35:41.769244 1614600 command_runner.go:130] > Device: 259,1	Inode: 1322432     Links: 1
	I1209 04:35:41.769251 1614600 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1209 04:35:41.769256 1614600 command_runner.go:130] > Access: 2025-12-09 04:31:33.728838377 +0000
	I1209 04:35:41.769262 1614600 command_runner.go:130] > Modify: 2025-12-09 04:27:28.466831926 +0000
	I1209 04:35:41.769267 1614600 command_runner.go:130] > Change: 2025-12-09 04:27:28.466831926 +0000
	I1209 04:35:41.769272 1614600 command_runner.go:130] >  Birth: 2025-12-09 04:27:28.466831926 +0000
	I1209 04:35:41.769363 1614600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 04:35:41.810027 1614600 command_runner.go:130] > Certificate will not expire
	I1209 04:35:41.810619 1614600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 04:35:41.851168 1614600 command_runner.go:130] > Certificate will not expire
	I1209 04:35:41.851713 1614600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 04:35:41.892758 1614600 command_runner.go:130] > Certificate will not expire
	I1209 04:35:41.892839 1614600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 04:35:41.938176 1614600 command_runner.go:130] > Certificate will not expire
	I1209 04:35:41.938689 1614600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 04:35:41.979665 1614600 command_runner.go:130] > Certificate will not expire
	I1209 04:35:41.980184 1614600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1209 04:35:42.021167 1614600 command_runner.go:130] > Certificate will not expire
	I1209 04:35:42.021686 1614600 kubeadm.go:401] StartCluster: {Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:35:42.021825 1614600 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 04:35:42.021936 1614600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:35:42.052115 1614600 cri.go:89] found id: ""
	I1209 04:35:42.052191 1614600 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 04:35:42.060116 1614600 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1209 04:35:42.060196 1614600 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1209 04:35:42.060220 1614600 command_runner.go:130] > /var/lib/minikube/etcd:
	I1209 04:35:42.061227 1614600 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1209 04:35:42.061247 1614600 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1209 04:35:42.061342 1614600 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 04:35:42.070417 1614600 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:35:42.071064 1614600 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-331811" does not appear in /home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:35:42.071256 1614600 kubeconfig.go:62] /home/jenkins/minikube-integration/22081-1577059/kubeconfig needs updating (will repair): [kubeconfig missing "functional-331811" cluster setting kubeconfig missing "functional-331811" context setting]
	I1209 04:35:42.071646 1614600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/kubeconfig: {Name:mk56da51bd85daae017f7ca18ae73d8a385a4c6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:35:42.072159 1614600 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:35:42.072417 1614600 kapi.go:59] client config for functional-331811: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt", KeyFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.key", CAFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 04:35:42.073140 1614600 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1209 04:35:42.073224 1614600 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1209 04:35:42.073266 1614600 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1209 04:35:42.073391 1614600 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1209 04:35:42.073418 1614600 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1209 04:35:42.073437 1614600 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1209 04:35:42.073813 1614600 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 04:35:42.085766 1614600 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1209 04:35:42.085868 1614600 kubeadm.go:602] duration metric: took 24.612846ms to restartPrimaryControlPlane
	I1209 04:35:42.085898 1614600 kubeadm.go:403] duration metric: took 64.220222ms to StartCluster
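
The two "duration metric" lines follow minikube's usual timing pattern: capture a start time, run the step, then log time.Since. A minimal sketch, with an illustrative stand-in for the real work:

// Timing sketch matching the "duration metric: took ..." lines above.
package main

import (
	"log"
	"time"
)

func restartPrimaryControlPlane() error {
	time.Sleep(25 * time.Millisecond) // stand-in for the real work
	return nil
}

func main() {
	start := time.Now()
	if err := restartPrimaryControlPlane(); err != nil {
		log.Fatal(err)
	}
	log.Printf("duration metric: took %s to restartPrimaryControlPlane", time.Since(start))
}
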
	I1209 04:35:42.085947 1614600 settings.go:142] acquiring lock: {Name:mk2ff9b0d23dc8757d89015af482b8c477568e49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:35:42.086095 1614600 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:35:42.086834 1614600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/kubeconfig: {Name:mk56da51bd85daae017f7ca18ae73d8a385a4c6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:35:42.087380 1614600 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 04:35:42.087524 1614600 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
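
The toEnable map above drives which addons get (re)applied on restart: every addon name maps to a desired state, and only the true entries are acted on. A hedged sketch of that shape, with an illustrative enable function and only a subset of the logged map:

// Sketch of the enable-addons map pattern from the log line above.
package main

import "fmt"

func enable(profile, addon string) {
	fmt.Printf("Setting addon %s=true in %q\n", addon, profile)
}

func main() {
	toEnable := map[string]bool{ // subset of the map in the log
		"default-storageclass": true,
		"storage-provisioner":  true,
		"ingress":              false,
		"metrics-server":       false,
	}
	for name, want := range toEnable {
		if want {
			enable("functional-331811", name)
		}
	}
}
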
	I1209 04:35:42.087628 1614600 addons.go:70] Setting storage-provisioner=true in profile "functional-331811"
	I1209 04:35:42.087691 1614600 addons.go:239] Setting addon storage-provisioner=true in "functional-331811"
	I1209 04:35:42.087740 1614600 host.go:66] Checking if "functional-331811" exists ...
	I1209 04:35:42.088325 1614600 cli_runner.go:164] Run: docker container inspect functional-331811 --format={{.State.Status}}
	I1209 04:35:42.087482 1614600 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 04:35:42.089019 1614600 addons.go:70] Setting default-storageclass=true in profile "functional-331811"
	I1209 04:35:42.089039 1614600 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-331811"
	I1209 04:35:42.089353 1614600 cli_runner.go:164] Run: docker container inspect functional-331811 --format={{.State.Status}}
	I1209 04:35:42.092155 1614600 out.go:179] * Verifying Kubernetes components...
	I1209 04:35:42.095248 1614600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:35:42.128430 1614600 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 04:35:42.131623 1614600 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:42.131651 1614600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 04:35:42.131731 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
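
The cli_runner line above resolves the host port that docker mapped to the container's SSH port (22/tcp); the sshutil.go line further down shows the result (port 34255). The same lookup can be reproduced with the Go template from the log; this sketch assumes a local docker CLI and uses the container name from the log.

// Sketch of the host-port lookup: run docker inspect with the same
// template the log shows and trim the result.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "functional-331811").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
}
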
	I1209 04:35:42.147694 1614600 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:35:42.147902 1614600 kapi.go:59] client config for functional-331811: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt", KeyFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.key", CAFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 04:35:42.148207 1614600 addons.go:239] Setting addon default-storageclass=true in "functional-331811"
	I1209 04:35:42.148248 1614600 host.go:66] Checking if "functional-331811" exists ...
	I1209 04:35:42.148712 1614600 cli_runner.go:164] Run: docker container inspect functional-331811 --format={{.State.Status}}
	I1209 04:35:42.182846 1614600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:35:42.193184 1614600 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:42.193209 1614600 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 04:35:42.193289 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:42.220341 1614600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:35:42.327312 1614600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 04:35:42.346850 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:42.376931 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:43.076226 1614600 node_ready.go:35] waiting up to 6m0s for node "functional-331811" to be "Ready" ...
	I1209 04:35:43.076344 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:43.076396 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
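
From here on the log alternates between addon applies and GET /api/v1/nodes/functional-331811 polls waiting for the node's Ready condition. A client-go sketch of such a wait loop, using the 6m budget and roughly 500ms cadence visible in the log; the loop shape is an assumption, not minikube's exact node_ready.go code.

// Sketch of the node_ready wait: poll the node object until its Ready
// condition is True or the timeout expires. Transient errors (e.g. the
// connection-refused warnings below) just trigger another poll.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22081-1577059/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "functional-331811", metav1.GetOptions{})
			if err != nil {
				return false, nil // connection refused etc.: keep retrying, as the log does
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
	fmt.Println("node ready wait finished, err =", err)
}
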
	I1209 04:35:43.076607 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:43.076635 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.076655 1614600 retry.go:31] will retry after 310.700454ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.076685 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:43.076702 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.076708 1614600 retry.go:31] will retry after 282.763546ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
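
Each failed apply above is followed by a retry.go line scheduling another attempt after a growing, jittered delay. A sketch of that retry-with-backoff pattern follows; the command line is condensed from the log (the full kubectl path is /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl), and the doubling-with-jitter scheme is illustrative rather than minikube's exact policy.

// Retry-with-backoff sketch matching the "will retry after ..." lines.
package main

import (
	"log"
	"math/rand"
	"os/exec"
	"time"
)

func applyManifest(path string) error {
	// Condensed form of the command in the log.
	cmd := exec.Command("sudo", "env", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"kubectl", "apply", "--force", "-f", path)
	return cmd.Run()
}

func main() {
	base := 300 * time.Millisecond
	for attempt := 0; attempt < 10; attempt++ {
		if err := applyManifest("/etc/kubernetes/addons/storage-provisioner.yaml"); err == nil {
			log.Println("apply succeeded")
			return
		} else {
			delay := time.Duration(float64(base) * (1 + rand.Float64())) // jittered
			log.Printf("apply failed, will retry after %s: %v", delay, err)
			time.Sleep(delay)
			base *= 2 // grow the base delay between attempts
		}
	}
	log.Println("giving up after repeated failures")
}
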
	I1209 04:35:43.076773 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:43.360393 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:43.387801 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:43.432930 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:43.433022 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.433059 1614600 retry.go:31] will retry after 489.220325ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.460835 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:43.460941 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.460967 1614600 retry.go:31] will retry after 355.931225ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.577252 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:43.577329 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:43.577711 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:43.817107 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:43.911473 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:43.915604 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.915640 1614600 retry.go:31] will retry after 537.488813ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.922787 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:43.976592 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:43.980371 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.980407 1614600 retry.go:31] will retry after 753.380628ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:44.076554 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:44.076652 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:44.077073 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:44.453574 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:44.512034 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:44.512090 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:44.512116 1614600 retry.go:31] will retry after 707.625417ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:44.577247 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:44.577348 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:44.577656 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:44.734008 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:44.795873 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:44.795936 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:44.795960 1614600 retry.go:31] will retry after 1.127913267s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:45.077396 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:45.077480 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:45.077910 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:35:45.077993 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:35:45.220540 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:45.296909 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:45.296951 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:45.296996 1614600 retry.go:31] will retry after 917.152391ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:45.577366 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:45.577441 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:45.577737 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:45.924157 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:45.995176 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:45.995217 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:45.995239 1614600 retry.go:31] will retry after 1.420775217s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:46.077446 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:46.077526 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:46.077798 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:46.215234 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:46.279745 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:46.279823 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:46.279850 1614600 retry.go:31] will retry after 1.336322791s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:46.577242 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:46.577341 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:46.577688 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:47.077361 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:47.077438 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:47.077723 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:47.416255 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:47.477013 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:47.480365 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:47.480397 1614600 retry.go:31] will retry after 2.174557655s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:47.576489 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:47.576616 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:47.576955 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:35:47.577044 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:35:47.617100 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:47.681529 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:47.681577 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:47.681598 1614600 retry.go:31] will retry after 3.276200411s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:48.077115 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:48.077203 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:48.077555 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:48.577382 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:48.577481 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:48.577821 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:49.076458 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:49.076528 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:49.076798 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:49.576545 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:49.576626 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:49.576988 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:49.655381 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:49.715000 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:49.715035 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:49.715054 1614600 retry.go:31] will retry after 3.337758974s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:50.077421 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:50.077518 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:50.077847 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:35:50.077903 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:35:50.576531 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:50.576630 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:50.576967 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:50.958720 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:51.022646 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:51.022681 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:51.022700 1614600 retry.go:31] will retry after 4.624703928s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:51.077048 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:51.077142 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:51.077474 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:51.577259 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:51.577334 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:51.577661 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:52.076578 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:52.076656 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:52.076943 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:52.576488 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:52.576565 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:52.576896 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:35:52.576958 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:35:53.053753 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:53.077246 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:53.077324 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:53.077594 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:53.113242 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:53.113284 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:53.113306 1614600 retry.go:31] will retry after 2.734988542s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:53.576425 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:53.576526 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:53.576833 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:54.076533 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:54.076634 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:54.076949 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:54.576551 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:54.576653 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:54.577004 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:35:54.577071 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:35:55.076426 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:55.076500 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:55.076811 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:55.576518 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:55.576596 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:55.576936 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:55.648391 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:55.705094 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:55.708789 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:55.708820 1614600 retry.go:31] will retry after 6.736330921s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:55.849034 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:55.918734 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:55.918780 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:55.918800 1614600 retry.go:31] will retry after 8.152075725s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:56.077153 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:56.077246 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:56.077636 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:56.577352 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:56.577427 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:56.577693 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:35:56.577743 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:35:57.077398 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:57.077499 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:57.077829 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:57.576552 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:57.576635 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:57.576959 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:58.076583 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:58.076666 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:58.076931 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:58.576498 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:58.576587 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:58.576893 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the identical GET poll repeats every ~500ms through 04:36:02.077, every response empty (connection refused); node_ready.go:55 logged the same connection-refused warning at 04:35:59.077 ...]
	W1209 04:36:01.576991 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
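
What this loop is doing: minikube is polling GET /api/v1/nodes/functional-331811 every ~500ms, waiting for the node's Ready condition, and every attempt dies at TCP connect because nothing is listening on 192.168.49.2:8441 yet. A minimal client-go sketch of that wait pattern, for orientation only (waitNodeReady, the fixed 500ms interval, and the logging are illustrative assumptions, not minikube's actual code):

    package nodewait

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the node object until its Ready condition is True
    // or ctx expires; connection-refused errors are treated as retryable.
    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
    	tick := time.NewTicker(500 * time.Millisecond)
    	defer tick.Stop()
    	for {
    		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil // node reports Ready
    				}
    			}
    		} else {
    			// The "dial tcp ... connection refused" errors above land here.
    			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
    		}
    		select {
    		case <-ctx.Done():
    			return fmt.Errorf("node %q never became Ready: %w", name, ctx.Err())
    		case <-tick.C:
    		}
    	}
    }
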
	I1209 04:36:02.446164 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:36:02.502744 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:36:02.506462 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:02.506498 1614600 retry.go:31] will retry after 8.388840508s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
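
The retry.go lines here are minikube's generic addon-apply retry: each failure reschedules the same kubectl command after a randomized delay, and the uneven delays across this log (8.39s, 8.08s, 18.76s, 20.02s, 13.47s, 28.44s) are consistent with exponential backoff plus jitter. A hedged sketch of that pattern (retryWithJitter and its parameters are hypothetical, not minikube's API):

    package retrydemo

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithJitter retries op with exponential backoff plus up to 50%
    // random jitter, printing "will retry after ..." like retry.go above.
    func retryWithJitter(attempts int, base time.Duration, op func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = op(); err == nil {
    			return nil
    		}
    		d := base << uint(i)                        // e.g. 8s, 16s, 32s, ...
    		d += time.Duration(rand.Int63n(int64(d/2))) // add random jitter
    		fmt.Printf("will retry after %s: %v\n", d, err)
    		time.Sleep(d)
    	}
    	return fmt.Errorf("all %d attempts failed: %w", attempts, err)
    }
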
	[... identical GET polls continue every ~500ms through 04:36:03.577, all refused ...]
	W1209 04:36:03.577179 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:04.071900 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	[... one more identical GET poll at 04:36:04.076, refused ...]
	I1209 04:36:04.150537 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:36:04.154620 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:04.154650 1614600 retry.go:31] will retry after 8.078270125s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
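
Note what is actually failing here: kubectl validates the manifest client-side by downloading the OpenAPI schema from the apiserver, so with nothing listening on 8441 the command dies during validation, before the apply is even attempted. The suggested --validate=false would only skip that schema download; the apply itself would then fail against the same dead endpoint, so retrying until the apiserver comes back is the only real fix. For illustration, the logged command with validation disabled (paths copied from the log; invoking it this way is an assumption, not what minikube does):

    package applydemo

    import (
    	"fmt"
    	"os/exec"
    )

    // applyWithoutValidation mirrors the failing command above, adding
    // --validate=false so the client-side schema download is skipped.
    func applyWithoutValidation() error {
    	cmd := exec.Command("sudo",
    		"KUBECONFIG=/var/lib/minikube/kubeconfig",
    		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
    		"apply", "--force", "--validate=false",
    		"-f", "/etc/kubernetes/addons/storageclass.yaml")
    	out, err := cmd.CombinedOutput()
    	fmt.Printf("%s", out)
    	return err
    }
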
	[... identical GET polls continue every ~500ms from 04:36:04.577 through 04:36:10.577; node_ready.go:55 logged the same connection-refused warning at 04:36:06.077 and 04:36:08.077 ...]
	W1209 04:36:10.577053 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:10.895548 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:36:10.953462 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:36:10.957148 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:10.957180 1614600 retry.go:31] will retry after 18.757746695s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... identical GET polls at 04:36:11.076, 04:36:11.576, and 04:36:12.077, all refused ...]
	I1209 04:36:12.233682 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:36:12.292817 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:36:12.296392 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:12.296423 1614600 retry.go:31] will retry after 20.023788924s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... one more identical GET poll at 04:36:12.577, refused ...]
	W1209 04:36:12.577421 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	[... identical GET polls continue every ~500ms from 04:36:13.077 through 04:36:29.577; node_ready.go:55 repeated the same connection-refused warning at 04:36:15.076, 04:36:17.077, 04:36:19.577, 04:36:22.077, 04:36:24.577, and 04:36:27.077 ...]
	W1209 04:36:29.576955 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
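
One detail worth reading out of the noise: every "Response" line in these polls reports status="" headers="" milliseconds=0 because the TCP connection is refused before any HTTP exchange happens, so there is no status code or latency to record. A minimal reproduction of that failure mode (address taken from the log; the probe function is illustrative):

    package dialdemo

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // With no apiserver listening on 8441, the dial fails with
    // "connection refused" almost instantly -- hence the 0ms responses.
    func probe() {
    	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", time.Second)
    	if err != nil {
    		fmt.Println("dial failed:", err)
    		return
    	}
    	conn.Close()
    }
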
	I1209 04:36:29.715418 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:36:29.773517 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:36:29.777518 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:29.777549 1614600 retry.go:31] will retry after 13.466249075s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... identical GET polls continue every ~500ms from 04:36:30.077 through 04:36:32.077, all refused ...]
	W1209 04:36:31.577857 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:32.320502 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:36:32.377593 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:36:32.381870 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:32.381909 1614600 retry.go:31] will retry after 28.435049856s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... identical GET polls continue every ~500ms from 04:36:32.577 through 04:36:43.077; node_ready.go:55 repeated the same connection-refused warning at 04:36:34.077, 04:36:36.577, and 04:36:39.077 ...]
	W1209 04:36:41.576987 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:43.244488 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:36:43.308556 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:36:43.308599 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:43.308622 1614600 retry.go:31] will retry after 20.568808948s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... identical GET polls continue every ~500ms from 04:36:43.577 through 04:36:48.577; node_ready.go:55 repeated the same connection-refused warning at 04:36:43.577, 04:36:46.077, and 04:36:48.077 ...]
	I1209 04:36:49.077177 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:49.077246 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:49.077507 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:49.577363 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:49.577442 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:49.577806 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:50.076499 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:50.076584 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:50.076933 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:50.576621 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:50.576693 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:50.577013 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:50.577067 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:51.076722 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:51.076799 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:51.077123 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:51.576506 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:51.576581 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:51.576933 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:52.076970 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:52.077045 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:52.077314 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:52.577191 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:52.577272 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:52.577623 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:52.577685 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:53.076390 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:53.076468 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:53.076830 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:53.577353 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:53.577471 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:53.577714 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:54.076421 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:54.076508 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:54.076889 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:54.576481 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:54.576586 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:54.576925 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:55.076607 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:55.076685 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:55.077020 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:55.077081 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:55.576488 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:55.576567 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:55.576912 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:56.076526 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:56.076606 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:56.076949 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:56.577383 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:56.577451 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:56.577701 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:57.076714 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:57.076787 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:57.077117 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:57.077170 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:57.576491 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:57.576573 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:57.576896 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:58.076441 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:58.076535 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:58.076850 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:58.576483 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:58.576569 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:58.576887 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:59.076498 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:59.076574 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:59.076928 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:59.576518 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:59.576600 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:59.576972 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:59.577037 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:00.076760 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:00.076863 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:00.077187 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:00.576907 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:00.576998 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:00.577391 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:00.817971 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:37:00.880147 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:37:00.880206 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:37:00.880224 1614600 retry.go:31] will retry after 16.46927575s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:37:01.076478 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:01.076543 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:01.076797 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... same ~500 ms poll repeated from 04:37:01 to 04:37:03; responses empty, connection still refused ...]
	I1209 04:37:03.878560 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:37:03.937026 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:37:03.940694 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:37:03.940802 1614600 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
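The stderr above points at --validate=false: kubectl apply validates manifests against the apiserver's /openapi/v2 schema, so while the apiserver is down even a well-formed manifest fails at the validation step. A sketch of the suggested workaround via os/exec (hypothetical wrapper; disabling validation only skips the schema fetch, and the apply still needs a reachable apiserver to persist anything):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl",
		"apply", "--force", "--validate=false",
		"-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		// Still fails here while [::1]:8441 refuses connections.
		fmt.Println("apply failed:", err)
	}
}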
	I1209 04:37:04.077117 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:04.077194 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:04.077475 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:04.077526 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	[... same ~500 ms poll repeated from 04:37:04 to 04:37:17; responses empty, node_ready.go:55 connection-refused warnings every ~2 s ...]
	I1209 04:37:17.349771 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:37:17.409388 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:37:17.413192 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:37:17.413302 1614600 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1209 04:37:17.416242 1614600 out.go:179] * Enabled addons: 
	I1209 04:37:17.419770 1614600 addons.go:530] duration metric: took 1m35.33224358s for enable addons: enabled=[]
	I1209 04:37:17.576427 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:17.576504 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:17.576800 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... same ~500 ms poll repeated from 04:37:18 to 04:37:37; responses empty, node_ready.go:55 connection-refused warnings every ~2 s ...]
	I1209 04:37:38.076505 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:38.076581 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:38.076931 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:38.576488 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:38.576566 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:38.576841 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:39.076477 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:39.076559 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:39.076894 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:39.076952 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:39.576497 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:39.576582 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:39.576911 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:40.076451 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:40.076525 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:40.076830 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:40.576466 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:40.576543 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:40.576873 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:41.076505 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:41.076581 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:41.076918 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:41.076977 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:41.576436 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:41.576507 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:41.576804 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:42.076573 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:42.076649 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:42.077059 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:42.576779 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:42.576872 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:42.577233 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:43.077479 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:43.077558 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:43.077870 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:43.077918 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:43.576487 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:43.576579 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:43.576959 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:44.076698 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:44.076780 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:44.077140 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:44.576789 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:44.576864 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:44.577123 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:45.076532 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:45.076619 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:45.077046 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:45.576773 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:45.576852 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:45.577196 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:45.577268 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:46.076955 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:46.077032 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:46.077330 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:46.577091 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:46.577164 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:46.577484 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:47.077355 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:47.077435 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:47.077777 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:47.576343 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:47.576413 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:47.576709 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:48.076430 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:48.076523 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:48.076924 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:48.076985 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:48.576690 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:48.576786 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:48.577139 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:49.076493 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:49.076573 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:49.076866 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:49.576472 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:49.576550 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:49.576870 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:50.076498 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:50.076580 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:50.076930 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:50.576463 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:50.576537 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:50.576825 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:50.576876 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:51.076471 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:51.076547 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:51.076833 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:51.576468 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:51.576545 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:51.576853 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:52.077018 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:52.077092 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:52.077387 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:52.577182 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:52.577265 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:52.577610 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:52.577668 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:53.077401 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:53.077481 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:53.077828 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:53.576483 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:53.576594 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:53.576849 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:54.076507 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:54.076588 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:54.076956 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:54.576530 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:54.576600 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:54.576860 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:55.076519 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:55.076586 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:55.076862 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:55.076908 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:55.576486 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:55.576622 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:55.576971 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:56.076686 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:56.076765 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:56.077127 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:56.576603 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:56.576676 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:56.576958 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:57.077077 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:57.077153 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:57.077489 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:57.077549 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:57.577277 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:57.577362 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:57.577693 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:58.076355 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:58.076431 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:58.076691 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:58.576442 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:58.576527 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:58.576837 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:59.076522 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:59.076605 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:59.076928 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:59.576429 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:59.576509 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:59.576828 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:59.576883 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:00.076590 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:00.076684 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:00.076994 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:00.576859 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:00.576953 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:00.577331 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:01.077097 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:01.077171 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:01.077483 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:01.577282 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:01.577361 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:01.577744 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:01.577806 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:02.076658 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:02.076737 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:02.077088 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:02.576471 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:02.576546 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:02.576881 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:03.076526 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:03.076607 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:03.076969 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:03.576667 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:03.576744 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:03.577088 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:04.076785 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:04.076860 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:04.077186 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:04.077249 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:04.576475 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:04.576552 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:04.576888 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:05.076606 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:05.076685 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:05.077018 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:05.576442 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:05.576519 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:05.576866 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:06.076554 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:06.076639 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:06.076961 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:06.576505 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:06.576581 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:06.576925 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:06.576985 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:07.076745 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:07.076824 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:07.077084 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:07.576464 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:07.576543 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:07.576890 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:08.076484 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:08.076571 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:08.076916 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:08.576613 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:08.576683 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:08.576948 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:09.076506 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:09.076590 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:09.076947 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:09.077009 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:09.576680 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:09.576755 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:09.577084 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:10.076460 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:10.076530 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:10.076842 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:10.576484 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:10.576560 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:10.576899 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:11.076596 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:11.076680 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:11.077014 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:11.077067 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:11.576395 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:11.576474 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:11.576732 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:12.076887 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:12.076960 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:12.077284 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:12.577054 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:12.577140 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:12.577479 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:13.077220 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:13.077295 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:13.077565 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:13.077607 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:13.577421 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:13.577504 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:13.577802 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:14.076532 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:14.076618 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:14.076974 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:14.576647 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:14.576716 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:14.577024 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:15.076742 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:15.076823 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:15.077205 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:15.577016 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:15.577093 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:15.577458 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:15.577510 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:16.076942 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:16.077018 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:16.077298 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:16.577080 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:16.577154 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:16.577499 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:17.077222 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:17.077307 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:17.077621 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:17.577360 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:17.577430 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:17.577689 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:17.577730 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:18.076508 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:18.076588 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:18.076948 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:18.576659 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:18.576737 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:18.577070 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:19.076445 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:19.076523 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:19.076847 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:19.576473 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:19.576553 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:19.576909 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:20.076508 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:20.076586 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:20.076942 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:20.077015 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:20.577408 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:20.577485 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:20.577743 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:21.076439 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:21.076529 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:21.076872 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:21.576583 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:21.576671 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:21.577011 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:22.077043 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:22.077118 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:22.077384 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:22.077433 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:22.577298 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:22.577383 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:22.577762 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:23.076477 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:23.076559 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:23.076896 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:23.576459 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:23.576531 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:23.576821 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:24.076595 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:24.076670 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:24.077017 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:24.576721 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:24.576822 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:24.577172 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:24.577228 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:25.076985 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:25.077057 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:25.077316 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:25.577081 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:25.577159 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:25.577525 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:26.077428 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:26.077536 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:26.077886 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:26.576422 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:26.576498 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:26.576744 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:27.076724 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:27.076800 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:27.077105 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:27.077166 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:27.576841 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:27.576921 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:27.577195 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:28.076523 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:28.076598 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:28.076903 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:28.576540 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:28.576626 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:28.576965 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:29.076687 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:29.076761 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:29.077094 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:29.576545 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:29.576621 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:29.576907 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:29.576958 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:30.076524 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:30.076608 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:30.076902 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:30.576497 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:30.576577 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:30.576896 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:31.076559 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:31.076633 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:31.076951 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:31.576483 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:31.576579 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:31.576903 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:32.077036 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:32.077110 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:32.077432 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:32.077494 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:32.577246 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:32.577331 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:32.577699 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:33.076404 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:33.076504 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:33.076853 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:33.576444 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:33.576560 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:33.577018 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:34.076479 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:34.076552 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:34.076840 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:34.576496 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:34.576575 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:34.576892 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:34.576950 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:35.076629 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:35.076710 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:35.077057 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:35.576738 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:35.576823 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:35.577124 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:36.076862 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:36.076938 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:36.077291 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:36.577100 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:36.577187 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:36.577528 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:36.577591 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:37.076430 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:37.076511 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:37.076779 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:37.576499 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:37.576590 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:37.576922 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:38.076501 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:38.076577 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:38.076985 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:38.576522 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:38.576605 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:38.576896 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:39.076497 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:39.076571 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:39.076900 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:39.076954 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:39.576601 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:39.576675 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:39.576993 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:40.076482 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:40.076567 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:40.076858 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:40.576479 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:40.576556 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:40.576936 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:41.076484 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:41.076560 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:41.076880 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:41.576426 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:41.576504 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:41.576818 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:41.576870 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:42.077124 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:42.077219 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:42.077565 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:42.577244 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:42.577337 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:42.577684 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:43.077329 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:43.077428 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:43.077706 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:43.577282 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:43.577356 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:43.577731 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:43.577796 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:44.077436 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:44.077527 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:44.078002 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:44.576444 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:44.576521 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:44.576829 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:45.076652 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:45.076741 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:45.077429 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:45.577050 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:45.577123 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:45.577460 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:46.077241 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:46.077341 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:46.077667 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:46.077724 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:46.576427 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:46.576518 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:46.576860 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:47.076726 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:47.076801 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:47.077144 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:47.576574 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:47.576648 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:47.576923 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:48.076626 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:48.076715 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:48.077126 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:48.576849 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:48.576930 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:48.577268 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:48.577334 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
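
Two details of the headers repeated throughout this trace are worth decoding. The Accept line is client-go's content negotiation (protobuf preferred, JSON fallback), and the kubernetes/$Format suffix in the User-Agent is the client-go default user agent with an unstamped version placeholder. Here is a sketch of building a clientset that sends the same Accept preference; the helper name and the idea of loading from a kubeconfig path are illustrative assumptions, the rest.Config fields are real.

package nodeready

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newProtobufClient returns a clientset whose requests carry the same
// Accept header seen in this log: protobuf first, JSON as fallback.
func newProtobufClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
	cfg.ContentType = "application/vnd.kubernetes.protobuf"
	return kubernetes.NewForConfig(cfg)
}
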
	I1209 04:38:49.077051 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:49.077122 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:49.077394 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:49.577191 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:49.577270 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:49.577582 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:50.077370 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:50.077454 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:50.077810 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:50.576424 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:50.576502 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:50.576796 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:51.076506 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:51.076583 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:51.076910 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:51.076969 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:51.576623 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:51.576749 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:51.577040 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:52.077085 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:52.077160 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:52.077422 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:52.577216 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:52.577295 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:52.577613 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:53.077392 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:53.077475 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:53.077797 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:53.077856 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:53.576362 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:53.576448 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:53.576718 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:54.076489 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:54.076568 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:54.076906 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:54.576614 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:54.576695 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:54.577055 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:55.076745 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:55.076818 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:55.077132 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:55.576528 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:55.576605 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:55.576901 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:55.576949 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:56.076653 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:56.076741 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:56.077039 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:56.576380 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:56.576457 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:56.576717 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:57.076676 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:57.076750 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:57.077090 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:57.576453 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:57.576546 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:57.576855 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:58.076528 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:58.076633 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:58.076936 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:58.076991 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:58.576513 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:58.576586 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:58.576869 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:59.076607 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:59.076681 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:59.077015 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:59.576391 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:59.576459 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:59.576721 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:00.076467 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:00.076562 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:00.076886 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:00.576524 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:00.576622 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:00.576958 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:00.577017 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:01.076583 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:01.076670 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:01.077008 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:01.576525 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:01.576603 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:01.576887 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:02.077021 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:02.077100 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:02.077451 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:02.577124 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:02.577217 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:02.577512 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:02.577562 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:03.077323 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:03.077407 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:03.077775 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:03.576388 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:03.576462 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:03.576801 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:04.076514 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:04.076589 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:04.076927 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:04.576506 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:04.576586 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:04.576948 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:05.076534 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:05.076614 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:05.076965 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:05.077020 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:05.576441 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:05.576512 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:05.576828 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:06.076541 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:06.076627 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:06.076963 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:06.576692 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:06.576772 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:06.577111 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:07.076853 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:07.076924 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:07.077177 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:07.077219 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:07.576482 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:07.576580 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:07.576924 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:08.076518 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:08.076598 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:08.076971 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:08.576536 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:08.576605 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:08.576907 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:09.076495 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:09.076571 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:09.076930 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:09.576669 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:09.576753 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:09.577117 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:09.577174 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:10.076441 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:10.076525 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:10.076856 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:10.576508 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:10.576584 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:10.576962 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:11.076574 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:11.076664 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:11.077066 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:11.576620 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:11.576687 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:11.576941 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:12.077176 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:12.077252 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:12.077629 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:12.077711 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:12.576425 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:12.576516 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:12.576897 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:13.076570 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:13.076642 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:13.076950 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:13.576510 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:13.576587 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:13.576938 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:14.076477 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:14.076552 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:14.076894 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:14.576443 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:14.576522 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:14.576831 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:14.576881 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
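
Every warning in this run bottoms out in the same dial error, "connect: connection refused": nothing is listening on 192.168.49.2:8441, as opposed to the apiserver answering with an HTTP error. If a caller wants to treat that case specially, for example keep retrying on a dead socket but fail fast on a genuine API error, the wrapped error chain can be inspected with the standard library alone. A minimal sketch, assuming Linux (as in this run), where client-go wraps the *net.OpError inside a *url.Error:

package nodeready

import (
	"errors"
	"syscall"
)

// isConnRefused reports whether err, possibly wrapped several layers
// deep, bottoms out in ECONNREFUSED, i.e. no process is listening on
// the target port.
func isConnRefused(err error) bool {
	return errors.Is(err, syscall.ECONNREFUSED)
}

In the loop sketched earlier, returning false, nil only when isConnRefused(err) holds would let genuine API errors abort the wait instead of retrying for the full timeout; minikube's choice here is to retry on every error until the deadline.
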
	I1209 04:39:15.076545 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:15.076624 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:15.076935 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:15.576475 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:15.576552 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:15.576870 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:16.076458 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:16.076538 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:16.076835 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:16.576450 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:16.576533 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:16.576890 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:16.576949 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:17.076773 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:17.076853 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:17.077193 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:17.576588 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:17.576661 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:17.576992 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:18.076473 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:18.076552 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:18.076899 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:18.576718 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:18.576802 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:18.577123 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:18.577182 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:19.076436 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:19.076509 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:19.076822 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:19.576524 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:19.576621 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:19.576983 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:20.076486 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:20.076564 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:20.076929 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:20.576479 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:20.576557 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:20.576928 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:21.076622 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:21.076716 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:21.077074 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:21.077128 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:21.576821 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:21.576903 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:21.577234 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:22.077298 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:22.077380 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:22.077644 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:22.576377 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:22.576459 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:22.576821 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:23.076525 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:23.076606 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:23.076901 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:23.576410 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:23.576486 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:23.576738 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:23.576788 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-331811 request/response pair shown above repeats every ~500ms from 04:39:24 through 04:40:24, each attempt returning an empty response (status="" headers="" milliseconds=0); node_ready.go:55 logs the same "will retry" warning roughly every two seconds: error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused ...]
	I1209 04:40:25.077197 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:25.077272 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:25.077553 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:25.577396 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:25.577471 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:25.577807 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:25.577866 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:26.076551 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:26.076646 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:26.077007 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:26.576462 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:26.576534 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:26.576839 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:27.076813 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:27.076897 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:27.077258 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:27.577061 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:27.577148 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:27.577479 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:28.077203 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:28.077282 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:28.077580 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:28.077625 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:28.576412 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:28.576489 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:28.576847 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:29.076502 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:29.076581 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:29.076943 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:29.576637 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:29.576712 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:29.576969 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:30.076527 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:30.076611 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:30.077034 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:30.576765 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:30.576846 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:30.577180 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:30.577234 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:31.076904 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:31.076979 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:31.077238 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:31.577016 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:31.577093 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:31.577496 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:32.077307 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:32.077384 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:32.077722 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:32.576465 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:32.576539 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:32.576829 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:33.076490 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:33.076563 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:33.076911 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:33.076973 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:33.576529 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:33.576607 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:33.576968 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:34.076674 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:34.076761 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:34.077041 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:34.576509 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:34.576590 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:34.576964 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:35.076695 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:35.076799 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:35.077151 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:35.077212 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:35.576777 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:35.576847 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:35.577114 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:36.076516 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:36.076591 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:36.076925 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:36.576496 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:36.576571 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:36.576862 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:37.076779 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:37.076855 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:37.077112 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:37.576479 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:37.576556 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:37.576867 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:37.576915 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:38.076487 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:38.076570 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:38.077013 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:38.576449 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:38.576523 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:38.576839 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:39.076527 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:39.076608 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:39.076938 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:39.576653 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:39.576731 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:39.577063 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:39.577116 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:40.076444 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:40.076518 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:40.076828 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:40.576473 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:40.576552 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:40.576874 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:41.076569 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:41.076652 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:41.077011 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:41.576534 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:41.576619 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:41.576925 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:42.077395 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:42.077483 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:42.077909 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:42.078001 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:42.576664 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:42.576741 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:42.577081 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:43.076642 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:43.076713 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:43.077006 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:43.576492 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:43.576571 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:43.576907 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:44.076499 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:44.076576 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:44.076879 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:44.576522 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:44.576597 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:44.576903 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:44.576957 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:45.076519 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:45.076615 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:45.077092 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:45.576710 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:45.576785 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:45.577104 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:46.076467 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:46.076542 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:46.076809 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:46.576463 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:46.576544 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:46.576867 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:47.076788 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:47.076864 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:47.077245 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:47.077300 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:47.576416 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:47.576497 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:47.576797 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:48.076521 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:48.076612 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:48.076992 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:48.576737 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:48.576822 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:48.577164 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:49.076459 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:49.076532 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:49.076827 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:49.576503 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:49.576585 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:49.576979 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:49.577037 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:50.076712 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:50.076793 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:50.077113 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:50.576457 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:50.576530 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:50.576900 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:51.076607 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:51.076686 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:51.077038 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:51.576760 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:51.576835 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:51.577164 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:51.577220 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:52.077315 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:52.077402 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:52.077698 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:52.576467 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:52.576558 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:52.576921 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:53.076656 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:53.076733 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:53.077077 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:53.576420 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:53.576495 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:53.576776 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:54.076522 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:54.076601 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:54.076946 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:54.077005 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:54.576713 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:54.576789 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:54.577077 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:55.076751 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:55.076830 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:55.077119 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:55.576516 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:55.576590 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:55.576893 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:56.076633 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:56.076712 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:56.077010 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:56.077057 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:56.576546 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:56.576617 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:56.576885 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:57.076904 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:57.076984 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:57.077287 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:57.577072 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:57.577156 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:57.577468 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:58.077202 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:58.077274 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:58.077543 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:58.077586 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:58.577422 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:58.577500 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:58.577833 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:59.076518 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:59.076598 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:59.076973 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:59.576658 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:59.576742 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:59.577051 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:00.076592 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:00.076674 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:00.077010 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:00.576861 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:00.576941 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:00.577298 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:00.577372 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:01.077099 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:01.077168 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:01.077505 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:01.577309 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:01.577392 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:01.577699 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:02.076372 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:02.076451 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:02.076749 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:02.576406 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:02.576484 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:02.576852 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:03.076591 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:03.076792 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:03.077195 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:03.077250 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:03.576825 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:03.576906 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:03.577274 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:04.076812 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:04.076893 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:04.077226 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:04.577138 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:04.577214 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:04.577536 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:05.077263 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:05.077343 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:05.077665 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:05.077723 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:05.576380 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:05.576451 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:05.576771 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:06.076472 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:06.076554 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:06.076889 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:06.576483 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:06.576557 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:06.576878 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:07.076816 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:07.076891 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:07.077173 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:07.576468 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:07.576545 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:07.576865 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:07.576918 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:08.076523 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:08.076616 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:08.077003 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:08.576544 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:08.576620 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:08.576943 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:09.076478 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:09.076560 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:09.076893 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:09.576500 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:09.576574 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:09.576908 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:09.576964 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:10.076483 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:10.076557 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:10.076873 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:10.576497 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:10.576579 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:10.576942 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:11.076653 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:11.076738 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:11.077082 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:11.576454 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:11.576527 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:11.576850 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:12.077093 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:12.077172 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:12.077480 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:12.077535 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:12.577297 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:12.577374 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:12.577704 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:13.076405 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:13.076480 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:13.076737 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:13.576468 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:13.576545 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:13.576887 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:14.076611 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:14.076691 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:14.077032 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:14.576620 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:14.576693 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:14.576955 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:14.576999 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:15.076684 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:15.076776 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:15.077081 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:15.576779 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:15.576853 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:15.577200 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:16.076568 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:16.076639 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:16.076920 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:16.576637 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:16.576710 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:16.577052 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:16.577105 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:17.076817 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:17.076891 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:17.077226 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:17.576383 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:17.576453 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:17.576788 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:18.076519 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:18.076603 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:18.076964 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:18.576667 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:18.576744 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:18.577069 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:18.577127 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:19.076439 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:19.076510 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:19.076761 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:19.576436 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:19.576511 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:19.576847 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:20.076523 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:20.076612 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:20.077004 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:20.576560 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:20.576633 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:20.576959 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:21.076661 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:21.076737 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:21.077147 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:21.077209 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:21.576890 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:21.576967 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:21.577291 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:22.077043 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:22.077129 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:22.077436 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:22.577198 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:22.577279 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:22.577606 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:23.076378 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:23.076452 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:23.076785 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:23.576403 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:23.576491 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:23.576812 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:23.576864 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:24.076524 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:24.076598 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:24.076950 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:24.576479 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:24.576557 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:24.576922 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:25.076617 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:25.076698 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:25.076975 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:25.576425 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:25.576506 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:25.576863 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:25.576919 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:26.076427 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:26.076505 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:26.076878 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:26.576569 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:26.576639 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:26.576910 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:27.076922 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:27.076997 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:27.077305 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:27.577103 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:27.577175 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:27.577550 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:27.577607 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:28.077345 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:28.077414 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:28.077671 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:28.577417 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:28.577513 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:28.577846 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:29.076489 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:29.076571 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:29.076943 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:29.576501 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:29.576574 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:29.576905 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:30.076521 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:30.076601 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:30.076966 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:30.077050 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:30.576505 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:30.576603 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:30.576966 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:31.076669 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:31.076744 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:31.077007 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:31.576502 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:31.576574 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:31.576918 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:32.076988 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:32.077068 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:32.077435 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:32.077497 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:32.577199 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:32.577274 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:32.577539 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:33.077339 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:33.077443 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:33.077811 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:33.576503 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:33.576588 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:33.576930 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:34.076499 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:34.076573 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:34.076861 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:34.576573 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:34.576657 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:34.577014 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:34.577071 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:35.076473 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:35.076546 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:35.076895 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:35.576501 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:35.576570 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:35.576829 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:36.076518 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:36.076598 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:36.076971 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:36.576553 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:36.576637 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:36.577032 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:37.076948 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:37.077019 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:37.077352 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:37.077398 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:37.577132 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:37.577216 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:37.577592 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:38.077367 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:38.077444 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:38.077774 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:38.576480 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:38.576549 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:38.576826 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:39.076517 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:39.076596 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:39.077020 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:39.576754 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:39.576834 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:39.577168 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:39.577222 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:40.076627 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:40.076703 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:40.076991 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:40.576486 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:40.576560 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:40.576891 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:41.076611 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:41.076693 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:41.077032 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:41.577374 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:41.577443 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:41.577738 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:41.577796 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:42.076410 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:42.076517 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:42.076959 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:42.576665 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:42.576744 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:42.577069 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:43.076660 1614600 node_ready.go:38] duration metric: took 6m0.000391304s for node "functional-331811" to be "Ready" ...
	I1209 04:41:43.080060 1614600 out.go:203] 
	W1209 04:41:43.083006 1614600 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1209 04:41:43.083030 1614600 out.go:285] * 
	W1209 04:41:43.085173 1614600 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 04:41:43.088614 1614600 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-arm64 start -p functional-331811 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m6.908554237s for "functional-331811" cluster.
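
The six minutes of "connection refused" polling above is minikube's node-ready wait running out its context deadline: the kube-apiserver inside the container never starts listening on 192.168.49.2:8441, so every GET fails at the TCP dial. As a rough sketch of that pattern only (not minikube's actual WaitNodeCondition implementation), the following Go probe reproduces the same failure mode, with the endpoint, the ~500ms poll interval, and the 6m0s bound all taken from the log above:

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// 6m0s matches the StartHostTimeout recorded in the profile config below.
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()

		addr := "192.168.49.2:8441" // apiserver endpoint from the log lines above
		for {
			select {
			case <-ctx.Done():
				// The state the test ended in: context deadline exceeded.
				fmt.Println("waiting for node to be ready:", ctx.Err())
				return
			case <-time.After(500 * time.Millisecond): // the log polls roughly twice a second
			}
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err != nil {
				fmt.Println("will retry:", err) // e.g. "connect: connection refused"
				continue
			}
			conn.Close()
			fmt.Println("apiserver port is accepting connections")
			return
		}
	}

Run against a host where nothing listens on that port, the probe prints "connect: connection refused" on every attempt until the deadline fires, which is exactly the shape of the loop above.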
I1209 04:41:43.788198 1580521 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-331811
helpers_test.go:243: (dbg) docker inspect functional-331811:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87",
	        "Created": "2025-12-09T04:27:19.770188806Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1609115,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T04:27:19.828715728Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/hostname",
	        "HostsPath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/hosts",
	        "LogPath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87-json.log",
	        "Name": "/functional-331811",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-331811:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-331811",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87",
	                "LowerDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685-init/diff:/var/lib/docker/overlay2/cb3f2b8eaaa8875b2899fccd39c4eec1759909855a0b804bc10246bdeabb16ed/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-331811",
	                "Source": "/var/lib/docker/volumes/functional-331811/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-331811",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-331811",
	                "name.minikube.sigs.k8s.io": "functional-331811",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5c0753338127320f08906f0ae98414e1971b55970cf028db179c2214fd2722cb",
	            "SandboxKey": "/var/run/docker/netns/5c0753338127",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34255"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34256"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34259"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34257"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34258"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-331811": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:27:66:bb:a1:d6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8c16962547dedb5d6155d1546bcc27e347ab5261f9ad46fc3b09cc8fb9cc112f",
	                    "EndpointID": "1a5d6a22e9497009b4121ea56dc4839e2ff8827d92252c0464236c5f49c11216",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-331811",
	                        "51da5dad63e9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
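
The inspect output above also shows the address minikube uses from the host side: 8441/tcp inside the container is published on 127.0.0.1:34258, while the in-network address is 192.168.49.2:8441. A minimal Go sketch for pulling that mapping out of the inspect JSON, assuming the output of `docker inspect functional-331811` is piped on stdin (the struct below models only the fields visible above and is not minikube's own parser):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os"
	)

	// container models just the NetworkSettings.Ports portion of `docker inspect` output.
	type container struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		var cs []container // docker inspect prints a JSON array of containers
		if err := json.NewDecoder(os.Stdin).Decode(&cs); err != nil {
			log.Fatal(err)
		}
		if len(cs) == 0 {
			log.Fatal("no container in input")
		}
		for _, b := range cs[0].NetworkSettings.Ports["8441/tcp"] {
			// For the report above this prints: 127.0.0.1:34258 -> 8441/tcp
			fmt.Printf("%s:%s -> 8441/tcp\n", b.HostIp, b.HostPort)
		}
	}

Usage: `docker inspect functional-331811 | go run probe.go`. The same mapping can also be read with docker's built-in template support, e.g. `docker inspect -f '{{json .NetworkSettings.Ports}}' functional-331811`.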
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-331811 -n functional-331811
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-331811 -n functional-331811: exit status 2 (348.520581ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
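
The `--format={{.Host}}` flag in the status command above is a Go text/template rendered against minikube's status struct, which is why stdout can read `Running` while the command still exits 2 for a degraded cluster: the host container is up even though the apiserver is not. A minimal sketch of that rendering, with a hypothetical stand-in for the real status type:

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a stand-in holding the field the format string selects;
	// minikube's actual struct has more fields than shown here.
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		st := Status{Host: "Running", Kubelet: "Running", APIServer: "Stopped"}
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		// Prints "Running" even though the apiserver is down, matching the
		// "exit status 2 (may be ok)" note above.
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			panic(err)
		}
	}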
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-331811 logs -n 25: (1.173177363s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-790468 ssh sudo cat /usr/share/ca-certificates/1580521.pem                                                                                     │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ ssh            │ functional-790468 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image          │ functional-790468 image ls                                                                                                                                │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ ssh            │ functional-790468 ssh sudo cat /etc/ssl/certs/15805212.pem                                                                                                │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image          │ functional-790468 image save kicbase/echo-server:functional-790468 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ ssh            │ functional-790468 ssh sudo cat /usr/share/ca-certificates/15805212.pem                                                                                    │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ ssh            │ functional-790468 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image          │ functional-790468 image rm kicbase/echo-server:functional-790468 --alsologtostderr                                                                        │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image          │ functional-790468 image ls                                                                                                                                │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image          │ functional-790468 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image          │ functional-790468 image ls                                                                                                                                │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ update-context │ functional-790468 update-context --alsologtostderr -v=2                                                                                                   │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ update-context │ functional-790468 update-context --alsologtostderr -v=2                                                                                                   │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image          │ functional-790468 image save --daemon kicbase/echo-server:functional-790468 --alsologtostderr                                                             │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ update-context │ functional-790468 update-context --alsologtostderr -v=2                                                                                                   │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image          │ functional-790468 image ls --format yaml --alsologtostderr                                                                                                │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image          │ functional-790468 image ls --format short --alsologtostderr                                                                                               │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ ssh            │ functional-790468 ssh pgrep buildkitd                                                                                                                     │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │                     │
	│ image          │ functional-790468 image ls --format json --alsologtostderr                                                                                                │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image          │ functional-790468 image ls --format table --alsologtostderr                                                                                               │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image          │ functional-790468 image build -t localhost/my-image:functional-790468 testdata/build --alsologtostderr                                                    │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image          │ functional-790468 image ls                                                                                                                                │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ delete         │ -p functional-790468                                                                                                                                      │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ start          │ -p functional-331811 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0         │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │                     │
	│ start          │ -p functional-331811 --alsologtostderr -v=8                                                                                                               │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:35 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 04:35:36
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 04:35:36.923741 1614600 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:35:36.923916 1614600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:35:36.923926 1614600 out.go:374] Setting ErrFile to fd 2...
	I1209 04:35:36.923933 1614600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:35:36.924200 1614600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 04:35:36.924580 1614600 out.go:368] Setting JSON to false
	I1209 04:35:36.925424 1614600 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":33477,"bootTime":1765221460,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1209 04:35:36.925503 1614600 start.go:143] virtualization:  
	I1209 04:35:36.929063 1614600 out.go:179] * [functional-331811] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 04:35:36.932800 1614600 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 04:35:36.932938 1614600 notify.go:221] Checking for updates...
	I1209 04:35:36.938644 1614600 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:35:36.941493 1614600 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:35:36.944366 1614600 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1577059/.minikube
	I1209 04:35:36.947167 1614600 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 04:35:36.949981 1614600 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 04:35:36.953271 1614600 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 04:35:36.953380 1614600 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:35:36.980248 1614600 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:35:36.980355 1614600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:35:37.042703 1614600 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 04:35:37.032815271 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:35:37.042820 1614600 docker.go:319] overlay module found
	I1209 04:35:37.045833 1614600 out.go:179] * Using the docker driver based on existing profile
	I1209 04:35:37.048621 1614600 start.go:309] selected driver: docker
	I1209 04:35:37.048647 1614600 start.go:927] validating driver "docker" against &{Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:35:37.048735 1614600 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 04:35:37.048847 1614600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:35:37.101945 1614600 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 04:35:37.092778249 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:35:37.102371 1614600 cni.go:84] Creating CNI manager for ""
	I1209 04:35:37.102446 1614600 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 04:35:37.102494 1614600 start.go:353] cluster config:
	{Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:35:37.105799 1614600 out.go:179] * Starting "functional-331811" primary control-plane node in "functional-331811" cluster
	I1209 04:35:37.108781 1614600 cache.go:134] Beginning downloading kic base image for docker with crio
	I1209 04:35:37.111778 1614600 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 04:35:37.114815 1614600 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1209 04:35:37.114886 1614600 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1209 04:35:37.114901 1614600 cache.go:65] Caching tarball of preloaded images
	I1209 04:35:37.114901 1614600 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 04:35:37.114988 1614600 preload.go:238] Found /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1209 04:35:37.114998 1614600 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1209 04:35:37.115114 1614600 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/config.json ...
	I1209 04:35:37.133782 1614600 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 04:35:37.133805 1614600 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 04:35:37.133825 1614600 cache.go:243] Successfully downloaded all kic artifacts
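
The kicbase check above resolves the pinned digest against images already present in the local Docker daemon; only on a miss would a pull be triggered. A minimal manual equivalent, assuming the same host daemon as this run:

    # List local kicbase images with digests; a row matching the pinned
    # sha256 means minikube can skip the pull, as logged above.
    docker images --digests gcr.io/k8s-minikube/kicbase-builds
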
	I1209 04:35:37.133858 1614600 start.go:360] acquireMachinesLock for functional-331811: {Name:mkd467b4f3dd08f05040481144eb7b6b1e27d3ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 04:35:37.133920 1614600 start.go:364] duration metric: took 38.638µs to acquireMachinesLock for "functional-331811"
	I1209 04:35:37.133944 1614600 start.go:96] Skipping create...Using existing machine configuration
	I1209 04:35:37.133953 1614600 fix.go:54] fixHost starting: 
	I1209 04:35:37.134223 1614600 cli_runner.go:164] Run: docker container inspect functional-331811 --format={{.State.Status}}
	I1209 04:35:37.151389 1614600 fix.go:112] recreateIfNeeded on functional-331811: state=Running err=<nil>
	W1209 04:35:37.151428 1614600 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 04:35:37.154776 1614600 out.go:252] * Updating the running docker "functional-331811" container ...
	I1209 04:35:37.154815 1614600 machine.go:94] provisionDockerMachine start ...
	I1209 04:35:37.154907 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:37.171646 1614600 main.go:143] libmachine: Using SSH client type: native
	I1209 04:35:37.171972 1614600 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34255 <nil> <nil>}
	I1209 04:35:37.171985 1614600 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 04:35:37.327745 1614600 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-331811
	
	I1209 04:35:37.327810 1614600 ubuntu.go:182] provisioning hostname "functional-331811"
	I1209 04:35:37.327896 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:37.347228 1614600 main.go:143] libmachine: Using SSH client type: native
	I1209 04:35:37.347562 1614600 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34255 <nil> <nil>}
	I1209 04:35:37.347574 1614600 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-331811 && echo "functional-331811" | sudo tee /etc/hostname
	I1209 04:35:37.512164 1614600 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-331811
	
	I1209 04:35:37.512262 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:37.529769 1614600 main.go:143] libmachine: Using SSH client type: native
	I1209 04:35:37.530100 1614600 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34255 <nil> <nil>}
	I1209 04:35:37.530124 1614600 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-331811' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-331811/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-331811' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 04:35:37.682808 1614600 main.go:143] libmachine: SSH cmd err, output: <nil>: 
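
The inlined script above is idempotent: it leaves /etc/hosts alone when a line already ends in the hostname, rewrites an existing 127.0.1.1 entry if one is present, and otherwise appends one. A quick check of the result (illustrative, run inside the node):

    # Expect a line such as "127.0.1.1 functional-331811".
    grep functional-331811 /etc/hosts
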
	I1209 04:35:37.682838 1614600 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1577059/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1577059/.minikube}
	I1209 04:35:37.682870 1614600 ubuntu.go:190] setting up certificates
	I1209 04:35:37.682895 1614600 provision.go:84] configureAuth start
	I1209 04:35:37.682958 1614600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-331811
	I1209 04:35:37.700930 1614600 provision.go:143] copyHostCerts
	I1209 04:35:37.700976 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem
	I1209 04:35:37.701008 1614600 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem, removing ...
	I1209 04:35:37.701021 1614600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem
	I1209 04:35:37.701094 1614600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem (1078 bytes)
	I1209 04:35:37.701192 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem
	I1209 04:35:37.701215 1614600 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem, removing ...
	I1209 04:35:37.701230 1614600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem
	I1209 04:35:37.701259 1614600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem (1123 bytes)
	I1209 04:35:37.701304 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem
	I1209 04:35:37.701324 1614600 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem, removing ...
	I1209 04:35:37.701331 1614600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem
	I1209 04:35:37.701357 1614600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem (1675 bytes)
	I1209 04:35:37.701411 1614600 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem org=jenkins.functional-331811 san=[127.0.0.1 192.168.49.2 functional-331811 localhost minikube]
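
configureAuth regenerates the machine's server certificate with the SANs listed above (loopback, node IP, hostname, localhost, minikube). The SANs on the resulting server.pem can be confirmed with a standard openssl call; a sketch using the path from the log line above:

    # Dump the certificate and show its Subject Alternative Name extension.
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
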
	I1209 04:35:37.907915 1614600 provision.go:177] copyRemoteCerts
	I1209 04:35:37.907981 1614600 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 04:35:37.908038 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:37.925118 1614600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:35:38.031668 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 04:35:38.031745 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 04:35:38.051846 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 04:35:38.051953 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 04:35:38.075178 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 04:35:38.075249 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 04:35:38.102039 1614600 provision.go:87] duration metric: took 419.115897ms to configureAuth
	I1209 04:35:38.102117 1614600 ubuntu.go:206] setting minikube options for container-runtime
	I1209 04:35:38.102384 1614600 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 04:35:38.102539 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:38.125059 1614600 main.go:143] libmachine: Using SSH client type: native
	I1209 04:35:38.125376 1614600 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34255 <nil> <nil>}
	I1209 04:35:38.125391 1614600 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 04:35:38.471803 1614600 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 04:35:38.471824 1614600 machine.go:97] duration metric: took 1.317001735s to provisionDockerMachine
	I1209 04:35:38.471836 1614600 start.go:293] postStartSetup for "functional-331811" (driver="docker")
	I1209 04:35:38.471848 1614600 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 04:35:38.471925 1614600 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 04:35:38.471961 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:38.490918 1614600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:35:38.598660 1614600 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 04:35:38.602109 1614600 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1209 04:35:38.602129 1614600 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1209 04:35:38.602133 1614600 command_runner.go:130] > VERSION_ID="12"
	I1209 04:35:38.602137 1614600 command_runner.go:130] > VERSION="12 (bookworm)"
	I1209 04:35:38.602143 1614600 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1209 04:35:38.602146 1614600 command_runner.go:130] > ID=debian
	I1209 04:35:38.602151 1614600 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1209 04:35:38.602156 1614600 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1209 04:35:38.602162 1614600 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1209 04:35:38.602263 1614600 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 04:35:38.602312 1614600 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 04:35:38.602329 1614600 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1577059/.minikube/addons for local assets ...
	I1209 04:35:38.602392 1614600 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1577059/.minikube/files for local assets ...
	I1209 04:35:38.602478 1614600 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem -> 15805212.pem in /etc/ssl/certs
	I1209 04:35:38.602488 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem -> /etc/ssl/certs/15805212.pem
	I1209 04:35:38.602561 1614600 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/test/nested/copy/1580521/hosts -> hosts in /etc/test/nested/copy/1580521
	I1209 04:35:38.602585 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/test/nested/copy/1580521/hosts -> /etc/test/nested/copy/1580521/hosts
	I1209 04:35:38.602639 1614600 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1580521
	I1209 04:35:38.610143 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem --> /etc/ssl/certs/15805212.pem (1708 bytes)
	I1209 04:35:38.627602 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/test/nested/copy/1580521/hosts --> /etc/test/nested/copy/1580521/hosts (40 bytes)
	I1209 04:35:38.644510 1614600 start.go:296] duration metric: took 172.65884ms for postStartSetup
	I1209 04:35:38.644590 1614600 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 04:35:38.644638 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:38.661666 1614600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:35:38.763521 1614600 command_runner.go:130] > 14%
	I1209 04:35:38.763600 1614600 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 04:35:38.767910 1614600 command_runner.go:130] > 169G
	I1209 04:35:38.768419 1614600 fix.go:56] duration metric: took 1.634462107s for fixHost
	I1209 04:35:38.768442 1614600 start.go:83] releasing machines lock for "functional-331811", held for 1.634508761s
	I1209 04:35:38.768510 1614600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-331811
	I1209 04:35:38.785686 1614600 ssh_runner.go:195] Run: cat /version.json
	I1209 04:35:38.785708 1614600 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 04:35:38.785735 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:38.785760 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:38.812264 1614600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:35:38.824669 1614600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:35:38.938034 1614600 command_runner.go:130] > {"iso_version": "v1.37.0-1764843329-22032", "kicbase_version": "v0.0.48-1765184860-22066", "minikube_version": "v1.37.0", "commit": "27bcd52be11288bda2f9abde063aa47b22607695"}
	I1209 04:35:38.938167 1614600 ssh_runner.go:195] Run: systemctl --version
	I1209 04:35:39.026186 1614600 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1209 04:35:39.029038 1614600 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1209 04:35:39.029075 1614600 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1209 04:35:39.029143 1614600 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 04:35:39.066886 1614600 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1209 04:35:39.071437 1614600 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1209 04:35:39.071476 1614600 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 04:35:39.071539 1614600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 04:35:39.079896 1614600 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 04:35:39.079922 1614600 start.go:496] detecting cgroup driver to use...
	I1209 04:35:39.079956 1614600 detect.go:187] detected "cgroupfs" cgroup driver on host os
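
The "cgroupfs" result is consistent with the CgroupDriver field in the docker info dump earlier in this log; the same value can be read directly from the host daemon (illustrative):

    # Ask the host Docker daemon which cgroup driver it uses.
    docker info --format '{{.CgroupDriver}}'
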
	I1209 04:35:39.080020 1614600 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 04:35:39.095690 1614600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 04:35:39.109020 1614600 docker.go:218] disabling cri-docker service (if available) ...
	I1209 04:35:39.109092 1614600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 04:35:39.124696 1614600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 04:35:39.138081 1614600 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 04:35:39.247127 1614600 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 04:35:39.364113 1614600 docker.go:234] disabling docker service ...
	I1209 04:35:39.364202 1614600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 04:35:39.381227 1614600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 04:35:39.394458 1614600 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 04:35:39.513409 1614600 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 04:35:39.656760 1614600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 04:35:39.669700 1614600 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 04:35:39.682849 1614600 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1209 04:35:39.684261 1614600 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 04:35:39.684369 1614600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:35:39.693327 1614600 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 04:35:39.693420 1614600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:35:39.702710 1614600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:35:39.711893 1614600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:35:39.720974 1614600 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 04:35:39.729134 1614600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:35:39.738010 1614600 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:35:39.746818 1614600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:35:39.757592 1614600 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 04:35:39.764510 1614600 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1209 04:35:39.765518 1614600 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 04:35:39.773280 1614600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:35:39.885186 1614600 ssh_runner.go:195] Run: sudo systemctl restart crio
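
Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys; this is a sketch reconstructed from the commands themselves, not a capture from the run:

    # Inspect the drop-in inside the node; expected (approximate) contents:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",
    #   ]
    sudo cat /etc/crio/crio.conf.d/02-crio.conf
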
	I1209 04:35:40.065444 1614600 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 04:35:40.065521 1614600 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 04:35:40.069680 1614600 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1209 04:35:40.069719 1614600 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1209 04:35:40.069751 1614600 command_runner.go:130] > Device: 0,72	Inode: 1638        Links: 1
	I1209 04:35:40.069764 1614600 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1209 04:35:40.069773 1614600 command_runner.go:130] > Access: 2025-12-09 04:35:39.990981436 +0000
	I1209 04:35:40.069780 1614600 command_runner.go:130] > Modify: 2025-12-09 04:35:39.990981436 +0000
	I1209 04:35:40.069788 1614600 command_runner.go:130] > Change: 2025-12-09 04:35:39.990981436 +0000
	I1209 04:35:40.069792 1614600 command_runner.go:130] >  Birth: -
	I1209 04:35:40.069850 1614600 start.go:564] Will wait 60s for crictl version
	I1209 04:35:40.069925 1614600 ssh_runner.go:195] Run: which crictl
	I1209 04:35:40.073554 1614600 command_runner.go:130] > /usr/local/bin/crictl
	I1209 04:35:40.073791 1614600 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 04:35:40.095945 1614600 command_runner.go:130] > Version:  0.1.0
	I1209 04:35:40.096030 1614600 command_runner.go:130] > RuntimeName:  cri-o
	I1209 04:35:40.096051 1614600 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1209 04:35:40.096074 1614600 command_runner.go:130] > RuntimeApiVersion:  v1
	I1209 04:35:40.098378 1614600 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1209 04:35:40.098514 1614600 ssh_runner.go:195] Run: crio --version
	I1209 04:35:40.127067 1614600 command_runner.go:130] > crio version 1.34.3
	I1209 04:35:40.127092 1614600 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1209 04:35:40.127099 1614600 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1209 04:35:40.127105 1614600 command_runner.go:130] >    GitTreeState:   dirty
	I1209 04:35:40.127110 1614600 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1209 04:35:40.127137 1614600 command_runner.go:130] >    GoVersion:      go1.24.6
	I1209 04:35:40.127156 1614600 command_runner.go:130] >    Compiler:       gc
	I1209 04:35:40.127168 1614600 command_runner.go:130] >    Platform:       linux/arm64
	I1209 04:35:40.127172 1614600 command_runner.go:130] >    Linkmode:       static
	I1209 04:35:40.127180 1614600 command_runner.go:130] >    BuildTags:
	I1209 04:35:40.127185 1614600 command_runner.go:130] >      static
	I1209 04:35:40.127194 1614600 command_runner.go:130] >      netgo
	I1209 04:35:40.127198 1614600 command_runner.go:130] >      osusergo
	I1209 04:35:40.127227 1614600 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1209 04:35:40.127238 1614600 command_runner.go:130] >      seccomp
	I1209 04:35:40.127242 1614600 command_runner.go:130] >      apparmor
	I1209 04:35:40.127250 1614600 command_runner.go:130] >      selinux
	I1209 04:35:40.127255 1614600 command_runner.go:130] >    LDFlags:          unknown
	I1209 04:35:40.127262 1614600 command_runner.go:130] >    SeccompEnabled:   true
	I1209 04:35:40.127267 1614600 command_runner.go:130] >    AppArmorEnabled:  false
	I1209 04:35:40.129252 1614600 ssh_runner.go:195] Run: crio --version
	I1209 04:35:40.157358 1614600 command_runner.go:130] > crio version 1.34.3
	I1209 04:35:40.157406 1614600 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1209 04:35:40.157412 1614600 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1209 04:35:40.157417 1614600 command_runner.go:130] >    GitTreeState:   dirty
	I1209 04:35:40.157423 1614600 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1209 04:35:40.157427 1614600 command_runner.go:130] >    GoVersion:      go1.24.6
	I1209 04:35:40.157432 1614600 command_runner.go:130] >    Compiler:       gc
	I1209 04:35:40.157472 1614600 command_runner.go:130] >    Platform:       linux/arm64
	I1209 04:35:40.157484 1614600 command_runner.go:130] >    Linkmode:       static
	I1209 04:35:40.157489 1614600 command_runner.go:130] >    BuildTags:
	I1209 04:35:40.157492 1614600 command_runner.go:130] >      static
	I1209 04:35:40.157496 1614600 command_runner.go:130] >      netgo
	I1209 04:35:40.157508 1614600 command_runner.go:130] >      osusergo
	I1209 04:35:40.157512 1614600 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1209 04:35:40.157516 1614600 command_runner.go:130] >      seccomp
	I1209 04:35:40.157547 1614600 command_runner.go:130] >      apparmor
	I1209 04:35:40.157557 1614600 command_runner.go:130] >      selinux
	I1209 04:35:40.157562 1614600 command_runner.go:130] >    LDFlags:          unknown
	I1209 04:35:40.157567 1614600 command_runner.go:130] >    SeccompEnabled:   true
	I1209 04:35:40.157573 1614600 command_runner.go:130] >    AppArmorEnabled:  false
	I1209 04:35:40.164627 1614600 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1209 04:35:40.167496 1614600 cli_runner.go:164] Run: docker network inspect functional-331811 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 04:35:40.183934 1614600 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1209 04:35:40.187985 1614600 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1209 04:35:40.188113 1614600 kubeadm.go:884] updating cluster {Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 04:35:40.188232 1614600 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1209 04:35:40.188297 1614600 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:35:40.225616 1614600 command_runner.go:130] > {
	I1209 04:35:40.225636 1614600 command_runner.go:130] >   "images":  [
	I1209 04:35:40.225641 1614600 command_runner.go:130] >     {
	I1209 04:35:40.225650 1614600 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1209 04:35:40.225655 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.225670 1614600 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1209 04:35:40.225673 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225678 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.225687 1614600 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1209 04:35:40.225695 1614600 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1209 04:35:40.225699 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225704 1614600 command_runner.go:130] >       "size":  "111333938",
	I1209 04:35:40.225711 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.225716 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.225719 1614600 command_runner.go:130] >     },
	I1209 04:35:40.225723 1614600 command_runner.go:130] >     {
	I1209 04:35:40.225729 1614600 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1209 04:35:40.225733 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.225738 1614600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1209 04:35:40.225742 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225751 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.225760 1614600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1209 04:35:40.225769 1614600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1209 04:35:40.225773 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225777 1614600 command_runner.go:130] >       "size":  "29037500",
	I1209 04:35:40.225781 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.225789 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.225792 1614600 command_runner.go:130] >     },
	I1209 04:35:40.225795 1614600 command_runner.go:130] >     {
	I1209 04:35:40.225802 1614600 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1209 04:35:40.225806 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.225811 1614600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1209 04:35:40.225814 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225818 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.225826 1614600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1209 04:35:40.225835 1614600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1209 04:35:40.225838 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225842 1614600 command_runner.go:130] >       "size":  "74491780",
	I1209 04:35:40.225847 1614600 command_runner.go:130] >       "username":  "nonroot",
	I1209 04:35:40.225851 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.225854 1614600 command_runner.go:130] >     },
	I1209 04:35:40.225857 1614600 command_runner.go:130] >     {
	I1209 04:35:40.225864 1614600 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1209 04:35:40.225868 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.225872 1614600 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1209 04:35:40.225881 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225885 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.225897 1614600 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1209 04:35:40.225905 1614600 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1209 04:35:40.225909 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225913 1614600 command_runner.go:130] >       "size":  "60857170",
	I1209 04:35:40.225916 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.225920 1614600 command_runner.go:130] >         "value":  "0"
	I1209 04:35:40.225923 1614600 command_runner.go:130] >       },
	I1209 04:35:40.225931 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.225936 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.225939 1614600 command_runner.go:130] >     },
	I1209 04:35:40.225942 1614600 command_runner.go:130] >     {
	I1209 04:35:40.225949 1614600 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1209 04:35:40.225953 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.225958 1614600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1209 04:35:40.225961 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225965 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.225973 1614600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1209 04:35:40.225981 1614600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1209 04:35:40.225983 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225987 1614600 command_runner.go:130] >       "size":  "84949999",
	I1209 04:35:40.225991 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.225995 1614600 command_runner.go:130] >         "value":  "0"
	I1209 04:35:40.225998 1614600 command_runner.go:130] >       },
	I1209 04:35:40.226001 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.226005 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.226008 1614600 command_runner.go:130] >     },
	I1209 04:35:40.226011 1614600 command_runner.go:130] >     {
	I1209 04:35:40.226018 1614600 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1209 04:35:40.226021 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.226027 1614600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1209 04:35:40.226030 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.226037 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.226045 1614600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1209 04:35:40.226054 1614600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1209 04:35:40.226057 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.226062 1614600 command_runner.go:130] >       "size":  "72170325",
	I1209 04:35:40.226065 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.226069 1614600 command_runner.go:130] >         "value":  "0"
	I1209 04:35:40.226072 1614600 command_runner.go:130] >       },
	I1209 04:35:40.226076 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.226080 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.226082 1614600 command_runner.go:130] >     },
	I1209 04:35:40.226085 1614600 command_runner.go:130] >     {
	I1209 04:35:40.226092 1614600 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1209 04:35:40.226096 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.226101 1614600 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1209 04:35:40.226104 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.226108 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.226115 1614600 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1209 04:35:40.226123 1614600 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1209 04:35:40.226126 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.226130 1614600 command_runner.go:130] >       "size":  "74106775",
	I1209 04:35:40.226133 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.226137 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.226140 1614600 command_runner.go:130] >     },
	I1209 04:35:40.226143 1614600 command_runner.go:130] >     {
	I1209 04:35:40.226149 1614600 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1209 04:35:40.226153 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.226159 1614600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1209 04:35:40.226162 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.226166 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.226174 1614600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1209 04:35:40.226196 1614600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1209 04:35:40.226200 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.226207 1614600 command_runner.go:130] >       "size":  "49822549",
	I1209 04:35:40.226210 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.226214 1614600 command_runner.go:130] >         "value":  "0"
	I1209 04:35:40.226218 1614600 command_runner.go:130] >       },
	I1209 04:35:40.226222 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.226226 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.226228 1614600 command_runner.go:130] >     },
	I1209 04:35:40.226232 1614600 command_runner.go:130] >     {
	I1209 04:35:40.226238 1614600 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1209 04:35:40.226242 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.226246 1614600 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1209 04:35:40.226249 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.226253 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.226261 1614600 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1209 04:35:40.226269 1614600 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1209 04:35:40.226273 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.226277 1614600 command_runner.go:130] >       "size":  "519884",
	I1209 04:35:40.226280 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.226284 1614600 command_runner.go:130] >         "value":  "65535"
	I1209 04:35:40.226288 1614600 command_runner.go:130] >       },
	I1209 04:35:40.226294 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.226297 1614600 command_runner.go:130] >       "pinned":  true
	I1209 04:35:40.226301 1614600 command_runner.go:130] >     }
	I1209 04:35:40.226303 1614600 command_runner.go:130] >   ]
	I1209 04:35:40.226307 1614600 command_runner.go:130] > }
	I1209 04:35:40.228010 1614600 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 04:35:40.228035 1614600 crio.go:433] Images already preloaded, skipping extraction
	I1209 04:35:40.228091 1614600 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:35:40.253311 1614600 command_runner.go:130] > {
	I1209 04:35:40.253331 1614600 command_runner.go:130] >   "images":  [
	I1209 04:35:40.253335 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253349 1614600 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1209 04:35:40.253353 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253360 1614600 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1209 04:35:40.253363 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253367 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253375 1614600 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1209 04:35:40.253383 1614600 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1209 04:35:40.253386 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253391 1614600 command_runner.go:130] >       "size":  "111333938",
	I1209 04:35:40.253395 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.253400 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.253403 1614600 command_runner.go:130] >     },
	I1209 04:35:40.253406 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253412 1614600 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1209 04:35:40.253416 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253421 1614600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1209 04:35:40.253425 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253429 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253437 1614600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1209 04:35:40.253445 1614600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1209 04:35:40.253449 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253453 1614600 command_runner.go:130] >       "size":  "29037500",
	I1209 04:35:40.253457 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.253463 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.253466 1614600 command_runner.go:130] >     },
	I1209 04:35:40.253469 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253476 1614600 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1209 04:35:40.253480 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253485 1614600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1209 04:35:40.253489 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253492 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253500 1614600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1209 04:35:40.253508 1614600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1209 04:35:40.253515 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253519 1614600 command_runner.go:130] >       "size":  "74491780",
	I1209 04:35:40.253523 1614600 command_runner.go:130] >       "username":  "nonroot",
	I1209 04:35:40.253528 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.253531 1614600 command_runner.go:130] >     },
	I1209 04:35:40.253534 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253540 1614600 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1209 04:35:40.253544 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253549 1614600 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1209 04:35:40.253553 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253557 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253564 1614600 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1209 04:35:40.253571 1614600 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1209 04:35:40.253574 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253578 1614600 command_runner.go:130] >       "size":  "60857170",
	I1209 04:35:40.253581 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.253585 1614600 command_runner.go:130] >         "value":  "0"
	I1209 04:35:40.253592 1614600 command_runner.go:130] >       },
	I1209 04:35:40.253600 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.253604 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.253607 1614600 command_runner.go:130] >     },
	I1209 04:35:40.253611 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253617 1614600 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1209 04:35:40.253621 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253626 1614600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1209 04:35:40.253629 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253633 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253641 1614600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1209 04:35:40.253649 1614600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1209 04:35:40.253651 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253655 1614600 command_runner.go:130] >       "size":  "84949999",
	I1209 04:35:40.253659 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.253662 1614600 command_runner.go:130] >         "value":  "0"
	I1209 04:35:40.253669 1614600 command_runner.go:130] >       },
	I1209 04:35:40.253672 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.253676 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.253679 1614600 command_runner.go:130] >     },
	I1209 04:35:40.253682 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253688 1614600 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1209 04:35:40.253691 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253698 1614600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1209 04:35:40.253701 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253704 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253713 1614600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1209 04:35:40.253721 1614600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1209 04:35:40.253724 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253728 1614600 command_runner.go:130] >       "size":  "72170325",
	I1209 04:35:40.253731 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.253735 1614600 command_runner.go:130] >         "value":  "0"
	I1209 04:35:40.253738 1614600 command_runner.go:130] >       },
	I1209 04:35:40.253742 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.253745 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.253748 1614600 command_runner.go:130] >     },
	I1209 04:35:40.253751 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253758 1614600 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1209 04:35:40.253762 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253767 1614600 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1209 04:35:40.253770 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253773 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253781 1614600 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1209 04:35:40.253789 1614600 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1209 04:35:40.253792 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253795 1614600 command_runner.go:130] >       "size":  "74106775",
	I1209 04:35:40.253799 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.253803 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.253806 1614600 command_runner.go:130] >     },
	I1209 04:35:40.253812 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253819 1614600 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1209 04:35:40.253823 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253828 1614600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1209 04:35:40.253831 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253835 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253843 1614600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1209 04:35:40.253860 1614600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1209 04:35:40.253863 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253867 1614600 command_runner.go:130] >       "size":  "49822549",
	I1209 04:35:40.253870 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.253874 1614600 command_runner.go:130] >         "value":  "0"
	I1209 04:35:40.253877 1614600 command_runner.go:130] >       },
	I1209 04:35:40.253881 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.253884 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.253887 1614600 command_runner.go:130] >     },
	I1209 04:35:40.253890 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253896 1614600 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1209 04:35:40.253900 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253905 1614600 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1209 04:35:40.253908 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253912 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253919 1614600 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1209 04:35:40.253926 1614600 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1209 04:35:40.253929 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253934 1614600 command_runner.go:130] >       "size":  "519884",
	I1209 04:35:40.253937 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.253941 1614600 command_runner.go:130] >         "value":  "65535"
	I1209 04:35:40.253944 1614600 command_runner.go:130] >       },
	I1209 04:35:40.253948 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.253952 1614600 command_runner.go:130] >       "pinned":  true
	I1209 04:35:40.253955 1614600 command_runner.go:130] >     }
	I1209 04:35:40.253958 1614600 command_runner.go:130] >   ]
	I1209 04:35:40.253965 1614600 command_runner.go:130] > }
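The image list above is CRI image-service output in JSON form, which minikube compares against its preload manifest. A roughly equivalent manual query (a sketch: assumes crictl is installed on the node and CRI-O's socket is at its default path) would be:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock images --output json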
	I1209 04:35:40.254095 1614600 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 04:35:40.254103 1614600 cache_images.go:86] Images are preloaded, skipping loading
	I1209 04:35:40.254110 1614600 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1209 04:35:40.254208 1614600 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-331811 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
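The [Unit]/[Service] fragment above is the kubelet systemd drop-in that minikube renders from this cluster config. To see the merged unit that systemd actually runs on the node, the standard command is (a sketch; the drop-in path can vary by base image):

	sudo systemctl cat kubelet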
	I1209 04:35:40.254292 1614600 ssh_runner.go:195] Run: crio config
	I1209 04:35:40.303771 1614600 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1209 04:35:40.303802 1614600 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1209 04:35:40.303810 1614600 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1209 04:35:40.303813 1614600 command_runner.go:130] > #
	I1209 04:35:40.303821 1614600 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1209 04:35:40.303827 1614600 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1209 04:35:40.303834 1614600 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1209 04:35:40.303844 1614600 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1209 04:35:40.303848 1614600 command_runner.go:130] > # reload'.
	I1209 04:35:40.303854 1614600 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1209 04:35:40.303865 1614600 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1209 04:35:40.303872 1614600 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1209 04:35:40.303882 1614600 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1209 04:35:40.303886 1614600 command_runner.go:130] > [crio]
	I1209 04:35:40.303892 1614600 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1209 04:35:40.303900 1614600 command_runner.go:130] > # container images, in this directory.
	I1209 04:35:40.304039 1614600 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1209 04:35:40.304055 1614600 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1209 04:35:40.304161 1614600 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1209 04:35:40.304178 1614600 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores its images in this directory, separately from the root directory.
	I1209 04:35:40.304429 1614600 command_runner.go:130] > # imagestore = ""
	I1209 04:35:40.304453 1614600 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1209 04:35:40.304461 1614600 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1209 04:35:40.304691 1614600 command_runner.go:130] > # storage_driver = "overlay"
	I1209 04:35:40.304703 1614600 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1209 04:35:40.304710 1614600 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1209 04:35:40.304804 1614600 command_runner.go:130] > # storage_option = [
	I1209 04:35:40.305009 1614600 command_runner.go:130] > # ]
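As the comments above note, storage defaults come from containers-storage.conf, but they can be overridden for CRI-O alone. A minimal hypothetical override in crio.conf (option names as documented in containers-storage.conf(5); the mount option is illustrative) would be:

	[crio]
	storage_driver = "overlay"
	storage_option = [
		"overlay.mountopt=nodev",
	]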
	I1209 04:35:40.305024 1614600 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1209 04:35:40.305032 1614600 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1209 04:35:40.305284 1614600 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1209 04:35:40.305301 1614600 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1209 04:35:40.305327 1614600 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1209 04:35:40.305337 1614600 command_runner.go:130] > # always happen on a node reboot
	I1209 04:35:40.305502 1614600 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1209 04:35:40.305532 1614600 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1209 04:35:40.305540 1614600 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1209 04:35:40.305547 1614600 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1209 04:35:40.305748 1614600 command_runner.go:130] > # version_file_persist = ""
	I1209 04:35:40.305764 1614600 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1209 04:35:40.305775 1614600 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1209 04:35:40.306057 1614600 command_runner.go:130] > # internal_wipe = true
	I1209 04:35:40.306082 1614600 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1209 04:35:40.306090 1614600 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1209 04:35:40.306271 1614600 command_runner.go:130] > # internal_repair = true
	I1209 04:35:40.306293 1614600 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1209 04:35:40.306300 1614600 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1209 04:35:40.306308 1614600 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1209 04:35:40.306632 1614600 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1209 04:35:40.306647 1614600 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1209 04:35:40.306651 1614600 command_runner.go:130] > [crio.api]
	I1209 04:35:40.306663 1614600 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1209 04:35:40.306916 1614600 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1209 04:35:40.306934 1614600 command_runner.go:130] > # IP address on which the stream server will listen.
	I1209 04:35:40.307148 1614600 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1209 04:35:40.307163 1614600 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1209 04:35:40.307169 1614600 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1209 04:35:40.307396 1614600 command_runner.go:130] > # stream_port = "0"
	I1209 04:35:40.307416 1614600 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1209 04:35:40.307661 1614600 command_runner.go:130] > # stream_enable_tls = false
	I1209 04:35:40.307682 1614600 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1209 04:35:40.307871 1614600 command_runner.go:130] > # stream_idle_timeout = ""
	I1209 04:35:40.307887 1614600 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1209 04:35:40.307900 1614600 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1209 04:35:40.308079 1614600 command_runner.go:130] > # stream_tls_cert = ""
	I1209 04:35:40.308090 1614600 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1209 04:35:40.308097 1614600 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1209 04:35:40.308297 1614600 command_runner.go:130] > # stream_tls_key = ""
	I1209 04:35:40.308313 1614600 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1209 04:35:40.308326 1614600 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1209 04:35:40.308345 1614600 command_runner.go:130] > # automatically pick up the changes.
	I1209 04:35:40.308572 1614600 command_runner.go:130] > # stream_tls_ca = ""
	I1209 04:35:40.308610 1614600 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1209 04:35:40.308814 1614600 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1209 04:35:40.308835 1614600 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1209 04:35:40.309085 1614600 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1209 04:35:40.309103 1614600 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1209 04:35:40.309115 1614600 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1209 04:35:40.309119 1614600 command_runner.go:130] > [crio.runtime]
	I1209 04:35:40.309126 1614600 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1209 04:35:40.309132 1614600 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1209 04:35:40.309143 1614600 command_runner.go:130] > # "nofile=1024:2048"
	I1209 04:35:40.309150 1614600 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1209 04:35:40.309302 1614600 command_runner.go:130] > # default_ulimits = [
	I1209 04:35:40.309485 1614600 command_runner.go:130] > # ]
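Following the "<ulimit name>=<soft limit>:<hard limit>" format described above, a hypothetical entry raising the open-file limit for all containers would be:

	[crio.runtime]
	default_ulimits = [
		"nofile=4096:8192",
	]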
	I1209 04:35:40.309504 1614600 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1209 04:35:40.309688 1614600 command_runner.go:130] > # no_pivot = false
	I1209 04:35:40.309706 1614600 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1209 04:35:40.309713 1614600 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1209 04:35:40.310551 1614600 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1209 04:35:40.310598 1614600 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1209 04:35:40.310608 1614600 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1209 04:35:40.310618 1614600 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1209 04:35:40.310767 1614600 command_runner.go:130] > # conmon = ""
	I1209 04:35:40.310786 1614600 command_runner.go:130] > # Cgroup setting for conmon
	I1209 04:35:40.310795 1614600 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1209 04:35:40.310806 1614600 command_runner.go:130] > conmon_cgroup = "pod"
	I1209 04:35:40.310814 1614600 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1209 04:35:40.310835 1614600 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1209 04:35:40.310842 1614600 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1209 04:35:40.310849 1614600 command_runner.go:130] > # conmon_env = [
	I1209 04:35:40.310857 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.310866 1614600 command_runner.go:130] > # Additional environment variables to set for all the
	I1209 04:35:40.310873 1614600 command_runner.go:130] > # containers. These are overridden if set in the
	I1209 04:35:40.310879 1614600 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1209 04:35:40.310886 1614600 command_runner.go:130] > # default_env = [
	I1209 04:35:40.310889 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.310895 1614600 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1209 04:35:40.310907 1614600 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1209 04:35:40.310914 1614600 command_runner.go:130] > # selinux = false
	I1209 04:35:40.310925 1614600 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1209 04:35:40.310933 1614600 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1209 04:35:40.310938 1614600 command_runner.go:130] > # This option supports live configuration reload.
	I1209 04:35:40.310944 1614600 command_runner.go:130] > # seccomp_profile = ""
	I1209 04:35:40.310954 1614600 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1209 04:35:40.310963 1614600 command_runner.go:130] > # This option supports live configuration reload.
	I1209 04:35:40.310968 1614600 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1209 04:35:40.310974 1614600 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1209 04:35:40.310984 1614600 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1209 04:35:40.310991 1614600 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1209 04:35:40.311002 1614600 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1209 04:35:40.311007 1614600 command_runner.go:130] > # This option supports live configuration reload.
	I1209 04:35:40.311011 1614600 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1209 04:35:40.311017 1614600 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1209 04:35:40.311022 1614600 command_runner.go:130] > # the cgroup blockio controller.
	I1209 04:35:40.311028 1614600 command_runner.go:130] > # blockio_config_file = ""
	I1209 04:35:40.311035 1614600 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1209 04:35:40.311042 1614600 command_runner.go:130] > # blockio parameters.
	I1209 04:35:40.311046 1614600 command_runner.go:130] > # blockio_reload = false
	I1209 04:35:40.311059 1614600 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1209 04:35:40.311064 1614600 command_runner.go:130] > # irqbalance daemon.
	I1209 04:35:40.311073 1614600 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1209 04:35:40.311083 1614600 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1209 04:35:40.311091 1614600 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1209 04:35:40.311107 1614600 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1209 04:35:40.311272 1614600 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1209 04:35:40.311287 1614600 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1209 04:35:40.311293 1614600 command_runner.go:130] > # This option supports live configuration reload.
	I1209 04:35:40.311441 1614600 command_runner.go:130] > # rdt_config_file = ""
	I1209 04:35:40.311462 1614600 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1209 04:35:40.311467 1614600 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1209 04:35:40.311477 1614600 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1209 04:35:40.311487 1614600 command_runner.go:130] > # separate_pull_cgroup = ""
	I1209 04:35:40.311493 1614600 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1209 04:35:40.311505 1614600 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1209 04:35:40.311514 1614600 command_runner.go:130] > # will be added.
	I1209 04:35:40.311522 1614600 command_runner.go:130] > # default_capabilities = [
	I1209 04:35:40.311525 1614600 command_runner.go:130] > # 	"CHOWN",
	I1209 04:35:40.311531 1614600 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1209 04:35:40.311535 1614600 command_runner.go:130] > # 	"FSETID",
	I1209 04:35:40.311541 1614600 command_runner.go:130] > # 	"FOWNER",
	I1209 04:35:40.311545 1614600 command_runner.go:130] > # 	"SETGID",
	I1209 04:35:40.311548 1614600 command_runner.go:130] > # 	"SETUID",
	I1209 04:35:40.311573 1614600 command_runner.go:130] > # 	"SETPCAP",
	I1209 04:35:40.311581 1614600 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1209 04:35:40.311585 1614600 command_runner.go:130] > # 	"KILL",
	I1209 04:35:40.311752 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.311769 1614600 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1209 04:35:40.311777 1614600 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1209 04:35:40.311784 1614600 command_runner.go:130] > # add_inheritable_capabilities = false
	I1209 04:35:40.311790 1614600 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1209 04:35:40.311796 1614600 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1209 04:35:40.311802 1614600 command_runner.go:130] > default_sysctls = [
	I1209 04:35:40.311807 1614600 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1209 04:35:40.311811 1614600 command_runner.go:130] > ]
	I1209 04:35:40.311823 1614600 command_runner.go:130] > # List of devices on the host that a
	I1209 04:35:40.311829 1614600 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1209 04:35:40.311833 1614600 command_runner.go:130] > # allowed_devices = [
	I1209 04:35:40.311843 1614600 command_runner.go:130] > # 	"/dev/fuse",
	I1209 04:35:40.311847 1614600 command_runner.go:130] > # 	"/dev/net/tun",
	I1209 04:35:40.311851 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.311856 1614600 command_runner.go:130] > # List of additional devices, specified as
	I1209 04:35:40.311863 1614600 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1209 04:35:40.311870 1614600 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1209 04:35:40.311876 1614600 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1209 04:35:40.311883 1614600 command_runner.go:130] > # additional_devices = [
	I1209 04:35:40.311886 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.311896 1614600 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1209 04:35:40.311900 1614600 command_runner.go:130] > # cdi_spec_dirs = [
	I1209 04:35:40.311903 1614600 command_runner.go:130] > # 	"/etc/cdi",
	I1209 04:35:40.311908 1614600 command_runner.go:130] > # 	"/var/run/cdi",
	I1209 04:35:40.311916 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.311923 1614600 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1209 04:35:40.311929 1614600 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1209 04:35:40.311936 1614600 command_runner.go:130] > # Defaults to false.
	I1209 04:35:40.311942 1614600 command_runner.go:130] > # device_ownership_from_security_context = false
	I1209 04:35:40.311958 1614600 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1209 04:35:40.311969 1614600 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1209 04:35:40.311973 1614600 command_runner.go:130] > # hooks_dir = [
	I1209 04:35:40.311980 1614600 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1209 04:35:40.311986 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.311992 1614600 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1209 04:35:40.312007 1614600 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1209 04:35:40.312013 1614600 command_runner.go:130] > # its default mounts from the following two files:
	I1209 04:35:40.312021 1614600 command_runner.go:130] > #
	I1209 04:35:40.312027 1614600 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1209 04:35:40.312034 1614600 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1209 04:35:40.312039 1614600 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1209 04:35:40.312045 1614600 command_runner.go:130] > #
	I1209 04:35:40.312051 1614600 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1209 04:35:40.312057 1614600 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1209 04:35:40.312065 1614600 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1209 04:35:40.312074 1614600 command_runner.go:130] > #      only add mounts it finds in this file.
	I1209 04:35:40.312077 1614600 command_runner.go:130] > #
	I1209 04:35:40.312081 1614600 command_runner.go:130] > # default_mounts_file = ""
	I1209 04:35:40.312087 1614600 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1209 04:35:40.312097 1614600 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1209 04:35:40.312102 1614600 command_runner.go:130] > # pids_limit = -1
	I1209 04:35:40.312108 1614600 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1209 04:35:40.312120 1614600 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1209 04:35:40.312128 1614600 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1209 04:35:40.312137 1614600 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1209 04:35:40.312275 1614600 command_runner.go:130] > # log_size_max = -1
	I1209 04:35:40.312297 1614600 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1209 04:35:40.312305 1614600 command_runner.go:130] > # log_to_journald = false
	I1209 04:35:40.312312 1614600 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1209 04:35:40.312322 1614600 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1209 04:35:40.312328 1614600 command_runner.go:130] > # Path to directory for container attach sockets.
	I1209 04:35:40.312333 1614600 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1209 04:35:40.312338 1614600 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1209 04:35:40.312345 1614600 command_runner.go:130] > # bind_mount_prefix = ""
	I1209 04:35:40.312351 1614600 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1209 04:35:40.312355 1614600 command_runner.go:130] > # read_only = false
	I1209 04:35:40.312361 1614600 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1209 04:35:40.312373 1614600 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1209 04:35:40.312378 1614600 command_runner.go:130] > # live configuration reload.
	I1209 04:35:40.312551 1614600 command_runner.go:130] > # log_level = "info"
	I1209 04:35:40.312568 1614600 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1209 04:35:40.312574 1614600 command_runner.go:130] > # This option supports live configuration reload.
	I1209 04:35:40.312578 1614600 command_runner.go:130] > # log_filter = ""
	I1209 04:35:40.312588 1614600 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1209 04:35:40.312594 1614600 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1209 04:35:40.312600 1614600 command_runner.go:130] > # separated by comma.
	I1209 04:35:40.312614 1614600 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1209 04:35:40.312622 1614600 command_runner.go:130] > # uid_mappings = ""
	I1209 04:35:40.312629 1614600 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1209 04:35:40.312635 1614600 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1209 04:35:40.312644 1614600 command_runner.go:130] > # separated by comma.
	I1209 04:35:40.312652 1614600 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1209 04:35:40.312657 1614600 command_runner.go:130] > # gid_mappings = ""
	I1209 04:35:40.312663 1614600 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1209 04:35:40.312670 1614600 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1209 04:35:40.312676 1614600 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1209 04:35:40.312689 1614600 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1209 04:35:40.312694 1614600 command_runner.go:130] > # minimum_mappable_uid = -1
	I1209 04:35:40.312706 1614600 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1209 04:35:40.312713 1614600 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1209 04:35:40.312719 1614600 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1209 04:35:40.312730 1614600 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1209 04:35:40.312735 1614600 command_runner.go:130] > # minimum_mappable_gid = -1
	I1209 04:35:40.312745 1614600 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1209 04:35:40.312753 1614600 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1209 04:35:40.312759 1614600 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1209 04:35:40.312763 1614600 command_runner.go:130] > # ctr_stop_timeout = 30
	I1209 04:35:40.312771 1614600 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1209 04:35:40.312781 1614600 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1209 04:35:40.312787 1614600 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1209 04:35:40.312792 1614600 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1209 04:35:40.312800 1614600 command_runner.go:130] > # drop_infra_ctr = true
	I1209 04:35:40.312807 1614600 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1209 04:35:40.312813 1614600 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1209 04:35:40.312825 1614600 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1209 04:35:40.312831 1614600 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1209 04:35:40.312838 1614600 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1209 04:35:40.312846 1614600 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1209 04:35:40.312852 1614600 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1209 04:35:40.312863 1614600 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1209 04:35:40.312871 1614600 command_runner.go:130] > # shared_cpuset = ""
	I1209 04:35:40.312877 1614600 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1209 04:35:40.312882 1614600 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1209 04:35:40.312891 1614600 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1209 04:35:40.312899 1614600 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1209 04:35:40.312903 1614600 command_runner.go:130] > # pinns_path = ""
	I1209 04:35:40.312908 1614600 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1209 04:35:40.312919 1614600 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1209 04:35:40.312924 1614600 command_runner.go:130] > # enable_criu_support = true
	I1209 04:35:40.312929 1614600 command_runner.go:130] > # Enable/disable the generation of the container,
	I1209 04:35:40.312936 1614600 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1209 04:35:40.312940 1614600 command_runner.go:130] > # enable_pod_events = false
	I1209 04:35:40.312948 1614600 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1209 04:35:40.312957 1614600 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1209 04:35:40.312962 1614600 command_runner.go:130] > # default_runtime = "crun"
	I1209 04:35:40.312967 1614600 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1209 04:35:40.312984 1614600 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior, where it is created as a directory).
	I1209 04:35:40.312997 1614600 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1209 04:35:40.313003 1614600 command_runner.go:130] > # creation as a file is not desired either.
	I1209 04:35:40.313011 1614600 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1209 04:35:40.313018 1614600 command_runner.go:130] > # the hostname is being managed dynamically.
	I1209 04:35:40.313023 1614600 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1209 04:35:40.313241 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.313258 1614600 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1209 04:35:40.313265 1614600 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1209 04:35:40.313271 1614600 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1209 04:35:40.313279 1614600 command_runner.go:130] > # Each entry in the table should follow the format:
	I1209 04:35:40.313282 1614600 command_runner.go:130] > #
	I1209 04:35:40.313287 1614600 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1209 04:35:40.313298 1614600 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1209 04:35:40.313303 1614600 command_runner.go:130] > # runtime_type = "oci"
	I1209 04:35:40.313307 1614600 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1209 04:35:40.313320 1614600 command_runner.go:130] > # inherit_default_runtime = false
	I1209 04:35:40.313326 1614600 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1209 04:35:40.313335 1614600 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1209 04:35:40.313340 1614600 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1209 04:35:40.313344 1614600 command_runner.go:130] > # monitor_env = []
	I1209 04:35:40.313349 1614600 command_runner.go:130] > # privileged_without_host_devices = false
	I1209 04:35:40.313353 1614600 command_runner.go:130] > # allowed_annotations = []
	I1209 04:35:40.313359 1614600 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1209 04:35:40.313365 1614600 command_runner.go:130] > # no_sync_log = false
	I1209 04:35:40.313369 1614600 command_runner.go:130] > # default_annotations = {}
	I1209 04:35:40.313373 1614600 command_runner.go:130] > # stream_websockets = false
	I1209 04:35:40.313377 1614600 command_runner.go:130] > # seccomp_profile = ""
	I1209 04:35:40.313410 1614600 command_runner.go:130] > # Where:
	I1209 04:35:40.313420 1614600 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1209 04:35:40.313427 1614600 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1209 04:35:40.313440 1614600 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1209 04:35:40.313446 1614600 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1209 04:35:40.313450 1614600 command_runner.go:130] > #   in $PATH.
	I1209 04:35:40.313457 1614600 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1209 04:35:40.313465 1614600 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1209 04:35:40.313471 1614600 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1209 04:35:40.313477 1614600 command_runner.go:130] > #   state.
	I1209 04:35:40.313484 1614600 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1209 04:35:40.313498 1614600 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1209 04:35:40.313505 1614600 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1209 04:35:40.313515 1614600 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1209 04:35:40.313521 1614600 command_runner.go:130] > #   the values from the default runtime on load time.
	I1209 04:35:40.313528 1614600 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1209 04:35:40.313537 1614600 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1209 04:35:40.313543 1614600 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1209 04:35:40.313550 1614600 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1209 04:35:40.313558 1614600 command_runner.go:130] > #   The currently recognized values are:
	I1209 04:35:40.313565 1614600 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1209 04:35:40.313575 1614600 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1209 04:35:40.313584 1614600 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1209 04:35:40.313591 1614600 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1209 04:35:40.313599 1614600 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1209 04:35:40.313611 1614600 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1209 04:35:40.313618 1614600 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1209 04:35:40.313632 1614600 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1209 04:35:40.313638 1614600 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1209 04:35:40.313644 1614600 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1209 04:35:40.313651 1614600 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1209 04:35:40.313662 1614600 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1209 04:35:40.313668 1614600 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1209 04:35:40.313674 1614600 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1209 04:35:40.313684 1614600 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1209 04:35:40.313693 1614600 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1209 04:35:40.313703 1614600 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1209 04:35:40.313707 1614600 command_runner.go:130] > #   deprecated option "conmon".
	I1209 04:35:40.313715 1614600 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1209 04:35:40.313721 1614600 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1209 04:35:40.313730 1614600 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1209 04:35:40.313735 1614600 command_runner.go:130] > #   should be moved to the container's cgroup
	I1209 04:35:40.313742 1614600 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1209 04:35:40.313752 1614600 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1209 04:35:40.313763 1614600 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1209 04:35:40.313771 1614600 command_runner.go:130] > #   conmon-rs by using:
	I1209 04:35:40.313779 1614600 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1209 04:35:40.313788 1614600 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1209 04:35:40.313799 1614600 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1209 04:35:40.313806 1614600 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1209 04:35:40.313811 1614600 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1209 04:35:40.313818 1614600 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1209 04:35:40.313825 1614600 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1209 04:35:40.313830 1614600 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1209 04:35:40.313842 1614600 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1209 04:35:40.313852 1614600 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1209 04:35:40.313860 1614600 command_runner.go:130] > #   when a machine crash happens.
	I1209 04:35:40.313868 1614600 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1209 04:35:40.313881 1614600 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1209 04:35:40.313889 1614600 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1209 04:35:40.313894 1614600 command_runner.go:130] > #   seccomp profile for the runtime.
	I1209 04:35:40.313900 1614600 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1209 04:35:40.313911 1614600 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1209 04:35:40.313915 1614600 command_runner.go:130] > #
	I1209 04:35:40.313919 1614600 command_runner.go:130] > # Using the seccomp notifier feature:
	I1209 04:35:40.313927 1614600 command_runner.go:130] > #
	I1209 04:35:40.313934 1614600 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1209 04:35:40.313942 1614600 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1209 04:35:40.313949 1614600 command_runner.go:130] > #
	I1209 04:35:40.313955 1614600 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1209 04:35:40.313962 1614600 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1209 04:35:40.313965 1614600 command_runner.go:130] > #
	I1209 04:35:40.313971 1614600 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1209 04:35:40.313974 1614600 command_runner.go:130] > # feature.
	I1209 04:35:40.313977 1614600 command_runner.go:130] > #
	I1209 04:35:40.313983 1614600 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1209 04:35:40.313992 1614600 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1209 04:35:40.314004 1614600 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1209 04:35:40.314014 1614600 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1209 04:35:40.314021 1614600 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1209 04:35:40.314029 1614600 command_runner.go:130] > #
	I1209 04:35:40.314036 1614600 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1209 04:35:40.314042 1614600 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1209 04:35:40.314045 1614600 command_runner.go:130] > #
	I1209 04:35:40.314051 1614600 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1209 04:35:40.314057 1614600 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1209 04:35:40.314063 1614600 command_runner.go:130] > #
	I1209 04:35:40.314070 1614600 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1209 04:35:40.314076 1614600 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1209 04:35:40.314083 1614600 command_runner.go:130] > # limitation.
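Tying the notifier description together: a sketch of a runtime handler that permits the annotation (the handler name is invented for illustration; the runc path matches the one printed below) could look like:

	[crio.runtime.runtimes.runc-notify]
	runtime_path = "/usr/libexec/crio/runc"
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]

A pod would then set the annotation io.kubernetes.cri-o.seccompNotifierAction: "stop" and restartPolicy: Never, so that a blocked syscall terminates the workload as described above.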
	I1209 04:35:40.314088 1614600 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1209 04:35:40.314093 1614600 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1209 04:35:40.314104 1614600 command_runner.go:130] > runtime_type = ""
	I1209 04:35:40.314108 1614600 command_runner.go:130] > runtime_root = "/run/crun"
	I1209 04:35:40.314112 1614600 command_runner.go:130] > inherit_default_runtime = false
	I1209 04:35:40.314120 1614600 command_runner.go:130] > runtime_config_path = ""
	I1209 04:35:40.314124 1614600 command_runner.go:130] > container_min_memory = ""
	I1209 04:35:40.314130 1614600 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1209 04:35:40.314134 1614600 command_runner.go:130] > monitor_cgroup = "pod"
	I1209 04:35:40.314138 1614600 command_runner.go:130] > monitor_exec_cgroup = ""
	I1209 04:35:40.314142 1614600 command_runner.go:130] > allowed_annotations = [
	I1209 04:35:40.314152 1614600 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1209 04:35:40.314155 1614600 command_runner.go:130] > ]
	I1209 04:35:40.314159 1614600 command_runner.go:130] > privileged_without_host_devices = false
	I1209 04:35:40.314164 1614600 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1209 04:35:40.314172 1614600 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1209 04:35:40.314177 1614600 command_runner.go:130] > runtime_type = ""
	I1209 04:35:40.314181 1614600 command_runner.go:130] > runtime_root = "/run/runc"
	I1209 04:35:40.314191 1614600 command_runner.go:130] > inherit_default_runtime = false
	I1209 04:35:40.314195 1614600 command_runner.go:130] > runtime_config_path = ""
	I1209 04:35:40.314203 1614600 command_runner.go:130] > container_min_memory = ""
	I1209 04:35:40.314208 1614600 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1209 04:35:40.314211 1614600 command_runner.go:130] > monitor_cgroup = "pod"
	I1209 04:35:40.314215 1614600 command_runner.go:130] > monitor_exec_cgroup = ""
	I1209 04:35:40.314219 1614600 command_runner.go:130] > privileged_without_host_devices = false
	I1209 04:35:40.314440 1614600 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1209 04:35:40.314455 1614600 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1209 04:35:40.314461 1614600 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1209 04:35:40.314470 1614600 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1209 04:35:40.314481 1614600 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1209 04:35:40.314491 1614600 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1209 04:35:40.314503 1614600 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1209 04:35:40.314509 1614600 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1209 04:35:40.314523 1614600 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1209 04:35:40.314532 1614600 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1209 04:35:40.314548 1614600 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1209 04:35:40.314556 1614600 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1209 04:35:40.314560 1614600 command_runner.go:130] > # Example:
	I1209 04:35:40.314565 1614600 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1209 04:35:40.314584 1614600 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1209 04:35:40.314596 1614600 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1209 04:35:40.314602 1614600 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1209 04:35:40.314611 1614600 command_runner.go:130] > # cpuset = "0-1"
	I1209 04:35:40.314615 1614600 command_runner.go:130] > # cpushares = "5"
	I1209 04:35:40.314619 1614600 command_runner.go:130] > # cpuquota = "1000"
	I1209 04:35:40.314623 1614600 command_runner.go:130] > # cpuperiod = "100000"
	I1209 04:35:40.314627 1614600 command_runner.go:130] > # cpulimit = "35"
	I1209 04:35:40.314630 1614600 command_runner.go:130] > # Where:
	I1209 04:35:40.314634 1614600 command_runner.go:130] > # The workload name is workload-type.
	I1209 04:35:40.314642 1614600 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1209 04:35:40.314651 1614600 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1209 04:35:40.314657 1614600 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1209 04:35:40.314665 1614600 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1209 04:35:40.314675 1614600 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1209 04:35:40.314680 1614600 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1209 04:35:40.314688 1614600 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1209 04:35:40.314695 1614600 command_runner.go:130] > # Default value is set to true
	I1209 04:35:40.314700 1614600 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1209 04:35:40.314706 1614600 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1209 04:35:40.314710 1614600 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1209 04:35:40.314715 1614600 command_runner.go:130] > # Default value is set to 'false'
	I1209 04:35:40.314719 1614600 command_runner.go:130] > # disable_hostport_mapping = false
	I1209 04:35:40.314731 1614600 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1209 04:35:40.314740 1614600 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1209 04:35:40.314747 1614600 command_runner.go:130] > # timezone = ""
	I1209 04:35:40.314754 1614600 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1209 04:35:40.314757 1614600 command_runner.go:130] > #
	I1209 04:35:40.314763 1614600 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1209 04:35:40.314777 1614600 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1209 04:35:40.314781 1614600 command_runner.go:130] > [crio.image]
	I1209 04:35:40.314787 1614600 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1209 04:35:40.314791 1614600 command_runner.go:130] > # default_transport = "docker://"
	I1209 04:35:40.314797 1614600 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1209 04:35:40.314810 1614600 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1209 04:35:40.314814 1614600 command_runner.go:130] > # global_auth_file = ""
	I1209 04:35:40.314819 1614600 command_runner.go:130] > # The image used to instantiate infra containers.
	I1209 04:35:40.314829 1614600 command_runner.go:130] > # This option supports live configuration reload.
	I1209 04:35:40.314834 1614600 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1209 04:35:40.314841 1614600 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1209 04:35:40.314852 1614600 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1209 04:35:40.314858 1614600 command_runner.go:130] > # This option supports live configuration reload.
	I1209 04:35:40.314863 1614600 command_runner.go:130] > # pause_image_auth_file = ""
	I1209 04:35:40.314868 1614600 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1209 04:35:40.314875 1614600 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1209 04:35:40.314888 1614600 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1209 04:35:40.314904 1614600 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1209 04:35:40.314909 1614600 command_runner.go:130] > # pause_command = "/pause"
	I1209 04:35:40.314915 1614600 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1209 04:35:40.314924 1614600 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1209 04:35:40.314931 1614600 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1209 04:35:40.314942 1614600 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1209 04:35:40.314949 1614600 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1209 04:35:40.314955 1614600 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1209 04:35:40.314959 1614600 command_runner.go:130] > # pinned_images = [
	I1209 04:35:40.314961 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.314968 1614600 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1209 04:35:40.314978 1614600 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1209 04:35:40.314984 1614600 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1209 04:35:40.314995 1614600 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1209 04:35:40.315001 1614600 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1209 04:35:40.315011 1614600 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1209 04:35:40.315023 1614600 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1209 04:35:40.315031 1614600 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1209 04:35:40.315037 1614600 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1209 04:35:40.315049 1614600 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1209 04:35:40.315055 1614600 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1209 04:35:40.315065 1614600 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1209 04:35:40.315071 1614600 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1209 04:35:40.315078 1614600 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1209 04:35:40.315086 1614600 command_runner.go:130] > # changing them here.
	I1209 04:35:40.315091 1614600 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1209 04:35:40.315095 1614600 command_runner.go:130] > # insecure_registries = [
	I1209 04:35:40.315099 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.315108 1614600 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1209 04:35:40.315114 1614600 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1209 04:35:40.315319 1614600 command_runner.go:130] > # image_volumes = "mkdir"
	I1209 04:35:40.315344 1614600 command_runner.go:130] > # Temporary directory to use for storing big files
	I1209 04:35:40.315350 1614600 command_runner.go:130] > # big_files_temporary_dir = ""
	I1209 04:35:40.315355 1614600 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1209 04:35:40.315362 1614600 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1209 04:35:40.315367 1614600 command_runner.go:130] > # auto_reload_registries = false
	I1209 04:35:40.315372 1614600 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1209 04:35:40.315381 1614600 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1209 04:35:40.315390 1614600 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1209 04:35:40.315399 1614600 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1209 04:35:40.315404 1614600 command_runner.go:130] > # The mode of short name resolution.
	I1209 04:35:40.315411 1614600 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1209 04:35:40.315422 1614600 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1209 04:35:40.315430 1614600 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1209 04:35:40.315434 1614600 command_runner.go:130] > # short_name_mode = "enforcing"
	I1209 04:35:40.315440 1614600 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1209 04:35:40.315446 1614600 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1209 04:35:40.315450 1614600 command_runner.go:130] > # oci_artifact_mount_support = true
	I1209 04:35:40.315456 1614600 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1209 04:35:40.315460 1614600 command_runner.go:130] > # CNI plugins.
	I1209 04:35:40.315463 1614600 command_runner.go:130] > [crio.network]
	I1209 04:35:40.315469 1614600 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1209 04:35:40.315475 1614600 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1209 04:35:40.315482 1614600 command_runner.go:130] > # cni_default_network = ""
	I1209 04:35:40.315488 1614600 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1209 04:35:40.315493 1614600 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1209 04:35:40.315503 1614600 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1209 04:35:40.315507 1614600 command_runner.go:130] > # plugin_dirs = [
	I1209 04:35:40.315515 1614600 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1209 04:35:40.315519 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.315526 1614600 command_runner.go:130] > # List of included pod metrics.
	I1209 04:35:40.315530 1614600 command_runner.go:130] > # included_pod_metrics = [
	I1209 04:35:40.315533 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.315539 1614600 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1209 04:35:40.315542 1614600 command_runner.go:130] > [crio.metrics]
	I1209 04:35:40.315547 1614600 command_runner.go:130] > # Globally enable or disable metrics support.
	I1209 04:35:40.315552 1614600 command_runner.go:130] > # enable_metrics = false
	I1209 04:35:40.315562 1614600 command_runner.go:130] > # Specify enabled metrics collectors.
	I1209 04:35:40.315567 1614600 command_runner.go:130] > # Per default all metrics are enabled.
	I1209 04:35:40.315573 1614600 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1209 04:35:40.315587 1614600 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1209 04:35:40.315593 1614600 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1209 04:35:40.315601 1614600 command_runner.go:130] > # metrics_collectors = [
	I1209 04:35:40.315605 1614600 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1209 04:35:40.315610 1614600 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1209 04:35:40.315614 1614600 command_runner.go:130] > # 	"containers_oom_total",
	I1209 04:35:40.315617 1614600 command_runner.go:130] > # 	"processes_defunct",
	I1209 04:35:40.315621 1614600 command_runner.go:130] > # 	"operations_total",
	I1209 04:35:40.315626 1614600 command_runner.go:130] > # 	"operations_latency_seconds",
	I1209 04:35:40.315630 1614600 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1209 04:35:40.315635 1614600 command_runner.go:130] > # 	"operations_errors_total",
	I1209 04:35:40.315638 1614600 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1209 04:35:40.315642 1614600 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1209 04:35:40.315646 1614600 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1209 04:35:40.315651 1614600 command_runner.go:130] > # 	"image_pulls_success_total",
	I1209 04:35:40.315661 1614600 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1209 04:35:40.315666 1614600 command_runner.go:130] > # 	"containers_oom_count_total",
	I1209 04:35:40.315675 1614600 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1209 04:35:40.315849 1614600 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1209 04:35:40.315864 1614600 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1209 04:35:40.315868 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.315880 1614600 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1209 04:35:40.315884 1614600 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1209 04:35:40.315889 1614600 command_runner.go:130] > # The port on which the metrics server will listen.
	I1209 04:35:40.315893 1614600 command_runner.go:130] > # metrics_port = 9090
	I1209 04:35:40.315899 1614600 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1209 04:35:40.315907 1614600 command_runner.go:130] > # metrics_socket = ""
	I1209 04:35:40.315912 1614600 command_runner.go:130] > # The certificate for the secure metrics server.
	I1209 04:35:40.315921 1614600 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1209 04:35:40.315929 1614600 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1209 04:35:40.315937 1614600 command_runner.go:130] > # certificate on any modification event.
	I1209 04:35:40.315944 1614600 command_runner.go:130] > # metrics_cert = ""
	I1209 04:35:40.315953 1614600 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1209 04:35:40.315959 1614600 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1209 04:35:40.315968 1614600 command_runner.go:130] > # metrics_key = ""
	I1209 04:35:40.315974 1614600 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1209 04:35:40.315982 1614600 command_runner.go:130] > [crio.tracing]
	I1209 04:35:40.315987 1614600 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1209 04:35:40.315996 1614600 command_runner.go:130] > # enable_tracing = false
	I1209 04:35:40.316002 1614600 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1209 04:35:40.316009 1614600 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1209 04:35:40.316017 1614600 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1209 04:35:40.316027 1614600 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1209 04:35:40.316032 1614600 command_runner.go:130] > # CRI-O NRI configuration.
	I1209 04:35:40.316035 1614600 command_runner.go:130] > [crio.nri]
	I1209 04:35:40.316040 1614600 command_runner.go:130] > # Globally enable or disable NRI.
	I1209 04:35:40.316043 1614600 command_runner.go:130] > # enable_nri = true
	I1209 04:35:40.316047 1614600 command_runner.go:130] > # NRI socket to listen on.
	I1209 04:35:40.316051 1614600 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1209 04:35:40.316055 1614600 command_runner.go:130] > # NRI plugin directory to use.
	I1209 04:35:40.316064 1614600 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1209 04:35:40.316069 1614600 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1209 04:35:40.316077 1614600 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1209 04:35:40.316083 1614600 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1209 04:35:40.316147 1614600 command_runner.go:130] > # nri_disable_connections = false
	I1209 04:35:40.316157 1614600 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1209 04:35:40.316162 1614600 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1209 04:35:40.316185 1614600 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1209 04:35:40.316193 1614600 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1209 04:35:40.316198 1614600 command_runner.go:130] > # NRI default validator configuration.
	I1209 04:35:40.316205 1614600 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1209 04:35:40.316215 1614600 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1209 04:35:40.316220 1614600 command_runner.go:130] > # can be restricted/rejected:
	I1209 04:35:40.316224 1614600 command_runner.go:130] > # - OCI hook injection
	I1209 04:35:40.316233 1614600 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1209 04:35:40.316238 1614600 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1209 04:35:40.316243 1614600 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1209 04:35:40.316247 1614600 command_runner.go:130] > # - adjustment of linux namespaces
	I1209 04:35:40.316254 1614600 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1209 04:35:40.316264 1614600 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1209 04:35:40.316271 1614600 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1209 04:35:40.316277 1614600 command_runner.go:130] > #
	I1209 04:35:40.316282 1614600 command_runner.go:130] > # [crio.nri.default_validator]
	I1209 04:35:40.316290 1614600 command_runner.go:130] > # nri_enable_default_validator = false
	I1209 04:35:40.316295 1614600 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1209 04:35:40.316307 1614600 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1209 04:35:40.316317 1614600 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1209 04:35:40.316322 1614600 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1209 04:35:40.316327 1614600 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1209 04:35:40.316480 1614600 command_runner.go:130] > # nri_validator_required_plugins = [
	I1209 04:35:40.316508 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.316521 1614600 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1209 04:35:40.316528 1614600 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1209 04:35:40.316540 1614600 command_runner.go:130] > [crio.stats]
	I1209 04:35:40.316546 1614600 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1209 04:35:40.316551 1614600 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1209 04:35:40.316555 1614600 command_runner.go:130] > # stats_collection_period = 0
	I1209 04:35:40.316562 1614600 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1209 04:35:40.316572 1614600 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1209 04:35:40.316577 1614600 command_runner.go:130] > # collection_period = 0
	I1209 04:35:40.318311 1614600 command_runner.go:130] ! time="2025-12-09T04:35:40.282255082Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1209 04:35:40.318330 1614600 command_runner.go:130] ! time="2025-12-09T04:35:40.2822971Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1209 04:35:40.318340 1614600 command_runner.go:130] ! time="2025-12-09T04:35:40.282328904Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1209 04:35:40.318349 1614600 command_runner.go:130] ! time="2025-12-09T04:35:40.282355243Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1209 04:35:40.318358 1614600 command_runner.go:130] ! time="2025-12-09T04:35:40.282430665Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:35:40.318367 1614600 command_runner.go:130] ! time="2025-12-09T04:35:40.282713695Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1209 04:35:40.318382 1614600 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1209 04:35:40.318459 1614600 cni.go:84] Creating CNI manager for ""
	I1209 04:35:40.318484 1614600 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 04:35:40.318506 1614600 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 04:35:40.318532 1614600 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-331811 NodeName:functional-331811 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 04:35:40.318689 1614600 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-331811"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 04:35:40.318765 1614600 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1209 04:35:40.328360 1614600 command_runner.go:130] > kubeadm
	I1209 04:35:40.328381 1614600 command_runner.go:130] > kubectl
	I1209 04:35:40.328387 1614600 command_runner.go:130] > kubelet
	I1209 04:35:40.329285 1614600 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 04:35:40.329353 1614600 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 04:35:40.336944 1614600 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1209 04:35:40.349970 1614600 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1209 04:35:40.362809 1614600 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1209 04:35:40.375503 1614600 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1209 04:35:40.379345 1614600 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1209 04:35:40.379778 1614600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:35:40.502305 1614600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 04:35:41.326409 1614600 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811 for IP: 192.168.49.2
	I1209 04:35:41.326563 1614600 certs.go:195] generating shared ca certs ...
	I1209 04:35:41.326611 1614600 certs.go:227] acquiring lock for ca certs: {Name:mkbe8bce08db7aa945866791683d426e1b560718 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:35:41.326833 1614600 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key
	I1209 04:35:41.326887 1614600 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key
	I1209 04:35:41.326895 1614600 certs.go:257] generating profile certs ...
	I1209 04:35:41.327067 1614600 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.key
	I1209 04:35:41.327129 1614600 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.key.29f4af34
	I1209 04:35:41.327233 1614600 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.key
	I1209 04:35:41.327250 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 04:35:41.327267 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 04:35:41.327279 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 04:35:41.327290 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 04:35:41.327349 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 04:35:41.327367 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 04:35:41.327413 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 04:35:41.327427 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 04:35:41.327509 1614600 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem (1338 bytes)
	W1209 04:35:41.327593 1614600 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521_empty.pem, impossibly tiny 0 bytes
	I1209 04:35:41.327604 1614600 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 04:35:41.327677 1614600 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem (1078 bytes)
	I1209 04:35:41.327750 1614600 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem (1123 bytes)
	I1209 04:35:41.327813 1614600 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem (1675 bytes)
	I1209 04:35:41.327913 1614600 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem (1708 bytes)
	I1209 04:35:41.327983 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem -> /usr/share/ca-certificates/1580521.pem
	I1209 04:35:41.328001 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem -> /usr/share/ca-certificates/15805212.pem
	I1209 04:35:41.328047 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:35:41.328720 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 04:35:41.349998 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 04:35:41.370613 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 04:35:41.391438 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 04:35:41.410483 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 04:35:41.429428 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 04:35:41.449234 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 04:35:41.468289 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 04:35:41.486148 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem --> /usr/share/ca-certificates/1580521.pem (1338 bytes)
	I1209 04:35:41.504497 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem --> /usr/share/ca-certificates/15805212.pem (1708 bytes)
	I1209 04:35:41.523111 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 04:35:41.542281 1614600 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 04:35:41.555566 1614600 ssh_runner.go:195] Run: openssl version
	I1209 04:35:41.561986 1614600 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1209 04:35:41.562090 1614600 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1580521.pem
	I1209 04:35:41.569846 1614600 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1580521.pem /etc/ssl/certs/1580521.pem
	I1209 04:35:41.577817 1614600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1580521.pem
	I1209 04:35:41.581778 1614600 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  9 04:27 /usr/share/ca-certificates/1580521.pem
	I1209 04:35:41.581849 1614600 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:27 /usr/share/ca-certificates/1580521.pem
	I1209 04:35:41.581927 1614600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1580521.pem
	I1209 04:35:41.622889 1614600 command_runner.go:130] > 51391683
	I1209 04:35:41.623441 1614600 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 04:35:41.630995 1614600 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15805212.pem
	I1209 04:35:41.638454 1614600 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15805212.pem /etc/ssl/certs/15805212.pem
	I1209 04:35:41.646110 1614600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15805212.pem
	I1209 04:35:41.649703 1614600 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  9 04:27 /usr/share/ca-certificates/15805212.pem
	I1209 04:35:41.649815 1614600 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:27 /usr/share/ca-certificates/15805212.pem
	I1209 04:35:41.649886 1614600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15805212.pem
	I1209 04:35:41.690940 1614600 command_runner.go:130] > 3ec20f2e
	I1209 04:35:41.691023 1614600 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 04:35:41.698710 1614600 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:35:41.705943 1614600 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 04:35:41.713451 1614600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:35:41.717157 1614600 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  9 04:17 /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:35:41.717250 1614600 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:17 /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:35:41.717310 1614600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:35:41.757537 1614600 command_runner.go:130] > b5213941
	I1209 04:35:41.757976 1614600 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
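	The three rounds above follow OpenSSL's hashed-directory convention: `openssl x509 -hash` prints the certificate's subject hash, and the certificate must then be reachable as /etc/ssl/certs/<hash>.0, which the `ln -fs` and `test -L` steps create and verify. A minimal Go sketch of that same sequence, assuming openssl is on PATH and the process may write to /etc/ssl/certs (minikube actually runs these commands remotely via ssh_runner; installCACert is an illustrative name, not minikube's):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // installCACert links pemPath into /etc/ssl/certs under its OpenSSL
    // subject hash, mirroring the hash / ln -fs / test -L steps in the log.
    func installCACert(pemPath string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return "", fmt.Errorf("hashing %s: %w", pemPath, err)
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        _ = os.Remove(link) // ln -fs semantics: replace any stale link
        return link, os.Symlink(pemPath, link)
    }

    func main() {
        link, err := installCACert("/usr/share/ca-certificates/minikubeCA.pem")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("linked", link) // e.g. /etc/ssl/certs/b5213941.0
    }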
	I1209 04:35:41.765482 1614600 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 04:35:41.769213 1614600 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 04:35:41.769237 1614600 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1209 04:35:41.769244 1614600 command_runner.go:130] > Device: 259,1	Inode: 1322432     Links: 1
	I1209 04:35:41.769251 1614600 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1209 04:35:41.769256 1614600 command_runner.go:130] > Access: 2025-12-09 04:31:33.728838377 +0000
	I1209 04:35:41.769262 1614600 command_runner.go:130] > Modify: 2025-12-09 04:27:28.466831926 +0000
	I1209 04:35:41.769267 1614600 command_runner.go:130] > Change: 2025-12-09 04:27:28.466831926 +0000
	I1209 04:35:41.769272 1614600 command_runner.go:130] >  Birth: 2025-12-09 04:27:28.466831926 +0000
	I1209 04:35:41.769363 1614600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 04:35:41.810027 1614600 command_runner.go:130] > Certificate will not expire
	I1209 04:35:41.810619 1614600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 04:35:41.851168 1614600 command_runner.go:130] > Certificate will not expire
	I1209 04:35:41.851713 1614600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 04:35:41.892758 1614600 command_runner.go:130] > Certificate will not expire
	I1209 04:35:41.892839 1614600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 04:35:41.938176 1614600 command_runner.go:130] > Certificate will not expire
	I1209 04:35:41.938689 1614600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 04:35:41.979665 1614600 command_runner.go:130] > Certificate will not expire
	I1209 04:35:41.980184 1614600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1209 04:35:42.021167 1614600 command_runner.go:130] > Certificate will not expire
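	Each `-checkend 86400` probe above asks openssl whether the certificate expires within the next 86400 seconds (24 hours). The equivalent check in pure Go, as a sketch only (minikube shells out to openssl rather than using crypto/x509 for this; expiresWithin is an illustrative name):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // inside the given window, matching openssl's -checkend semantics.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if soon {
            fmt.Println("Certificate will expire")
        } else {
            fmt.Println("Certificate will not expire")
        }
    }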
	I1209 04:35:42.021686 1614600 kubeadm.go:401] StartCluster: {Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:35:42.021825 1614600 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 04:35:42.021936 1614600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:35:42.052115 1614600 cri.go:89] found id: ""
	I1209 04:35:42.052191 1614600 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 04:35:42.060116 1614600 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1209 04:35:42.060196 1614600 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1209 04:35:42.060220 1614600 command_runner.go:130] > /var/lib/minikube/etcd:
	I1209 04:35:42.061227 1614600 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1209 04:35:42.061247 1614600 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1209 04:35:42.061342 1614600 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 04:35:42.070417 1614600 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:35:42.071064 1614600 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-331811" does not appear in /home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:35:42.071256 1614600 kubeconfig.go:62] /home/jenkins/minikube-integration/22081-1577059/kubeconfig needs updating (will repair): [kubeconfig missing "functional-331811" cluster setting kubeconfig missing "functional-331811" context setting]
	I1209 04:35:42.071646 1614600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/kubeconfig: {Name:mk56da51bd85daae017f7ca18ae73d8a385a4c6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:35:42.072159 1614600 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:35:42.072417 1614600 kapi.go:59] client config for functional-331811: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt", KeyFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.key", CAFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 04:35:42.073140 1614600 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1209 04:35:42.073224 1614600 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1209 04:35:42.073266 1614600 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1209 04:35:42.073391 1614600 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1209 04:35:42.073418 1614600 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1209 04:35:42.073437 1614600 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1209 04:35:42.073813 1614600 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 04:35:42.085766 1614600 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1209 04:35:42.085868 1614600 kubeadm.go:602] duration metric: took 24.612846ms to restartPrimaryControlPlane
	I1209 04:35:42.085898 1614600 kubeadm.go:403] duration metric: took 64.220222ms to StartCluster
	I1209 04:35:42.085947 1614600 settings.go:142] acquiring lock: {Name:mk2ff9b0d23dc8757d89015af482b8c477568e49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:35:42.086095 1614600 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:35:42.086834 1614600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/kubeconfig: {Name:mk56da51bd85daae017f7ca18ae73d8a385a4c6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:35:42.087380 1614600 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 04:35:42.087524 1614600 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 04:35:42.087628 1614600 addons.go:70] Setting storage-provisioner=true in profile "functional-331811"
	I1209 04:35:42.087691 1614600 addons.go:239] Setting addon storage-provisioner=true in "functional-331811"
	I1209 04:35:42.087740 1614600 host.go:66] Checking if "functional-331811" exists ...
	I1209 04:35:42.088325 1614600 cli_runner.go:164] Run: docker container inspect functional-331811 --format={{.State.Status}}
	I1209 04:35:42.087482 1614600 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 04:35:42.089019 1614600 addons.go:70] Setting default-storageclass=true in profile "functional-331811"
	I1209 04:35:42.089039 1614600 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-331811"
	I1209 04:35:42.089353 1614600 cli_runner.go:164] Run: docker container inspect functional-331811 --format={{.State.Status}}
	I1209 04:35:42.092155 1614600 out.go:179] * Verifying Kubernetes components...
	I1209 04:35:42.095248 1614600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:35:42.128430 1614600 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 04:35:42.131623 1614600 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:42.131651 1614600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 04:35:42.131731 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:42.147694 1614600 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:35:42.147902 1614600 kapi.go:59] client config for functional-331811: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt", KeyFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.key", CAFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 04:35:42.148207 1614600 addons.go:239] Setting addon default-storageclass=true in "functional-331811"
	I1209 04:35:42.148248 1614600 host.go:66] Checking if "functional-331811" exists ...
	I1209 04:35:42.148712 1614600 cli_runner.go:164] Run: docker container inspect functional-331811 --format={{.State.Status}}
	I1209 04:35:42.182846 1614600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:35:42.193184 1614600 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:42.193209 1614600 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 04:35:42.193289 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:42.220341 1614600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:35:42.327312 1614600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 04:35:42.346850 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:42.376931 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:43.076226 1614600 node_ready.go:35] waiting up to 6m0s for node "functional-331811" to be "Ready" ...
	I1209 04:35:43.076344 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:43.076396 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:43.076607 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:43.076635 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.076655 1614600 retry.go:31] will retry after 310.700454ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.076685 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:43.076702 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.076708 1614600 retry.go:31] will retry after 282.763546ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.076773 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:43.360393 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:43.387801 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:43.432930 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:43.433022 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.433059 1614600 retry.go:31] will retry after 489.220325ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.460835 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:43.460941 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.460967 1614600 retry.go:31] will retry after 355.931225ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.577252 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:43.577329 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:43.577711 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
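
The interleaved round_trippers lines are the health poll against the node object. A rough sketch (not client-go's actual round_trippers.go) of an http.RoundTripper wrapper that would produce this Request/Response pairing, including the empty status="" headers="" milliseconds=0 line when the TCP connect is refused and no response ever arrives:

    package main

    import (
        "log"
        "net/http"
        "time"
    )

    // loggingRT wraps another RoundTripper and logs each request/response pair.
    type loggingRT struct{ next http.RoundTripper }

    func (l loggingRT) RoundTrip(req *http.Request) (*http.Response, error) {
        log.Printf("\"Request\" verb=%q url=%q headers=%v", req.Method, req.URL.String(), req.Header)
        start := time.Now()
        resp, err := l.next.RoundTrip(req)
        ms := time.Since(start).Milliseconds()
        if err != nil {
            // A refused connect returns no *http.Response at all, which is
            // why the log shows status="" headers="" milliseconds=0.
            log.Printf("\"Response\" status=%q headers=%q milliseconds=%d", "", "", ms)
            return nil, err
        }
        log.Printf("\"Response\" status=%q milliseconds=%d", resp.Status, ms)
        return resp, nil
    }

    func main() {
        client := &http.Client{Transport: loggingRT{http.DefaultTransport}}
        if _, err := client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-331811"); err != nil {
            log.Print(err)
        }
    }
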
	I1209 04:35:43.817107 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:43.911473 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:43.915604 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.915640 1614600 retry.go:31] will retry after 537.488813ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.922787 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:43.976592 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:43.980371 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.980407 1614600 retry.go:31] will retry after 753.380628ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:44.076554 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:44.076652 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:44.077073 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:44.453574 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:44.512034 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:44.512090 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:44.512116 1614600 retry.go:31] will retry after 707.625417ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:44.577247 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:44.577348 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:44.577656 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:44.734008 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:44.795873 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:44.795936 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:44.795960 1614600 retry.go:31] will retry after 1.127913267s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:45.077396 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:45.077480 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:45.077910 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:35:45.077993 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
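
node_ready.go is polling the node's Ready condition every 500ms and tolerating connection-refused errors rather than failing fast. A hedged client-go sketch of that loop: the node name, kubeconfig path, and 500ms cadence are taken from the log; waitNodeReady and the timeout parameter are invented for illustration.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the node object until its Ready condition is True,
    // logging and retrying through transient errors such as the
    // connection-refused seen above.
    func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                fmt.Printf("error getting node %q condition \"Ready\" status (will retry): %v\n", name, err)
            } else {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(500 * time.Millisecond) // cadence matches the log timestamps
        }
        return fmt.Errorf("node %q never reported Ready within %v", name, timeout)
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        fmt.Println(waitNodeReady(cs, "functional-331811", 4*time.Minute))
    }
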
	I1209 04:35:45.220540 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:45.296909 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:45.296951 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:45.296996 1614600 retry.go:31] will retry after 917.152391ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:45.577366 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:45.577441 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:45.577737 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:45.924157 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:45.995176 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:45.995217 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:45.995239 1614600 retry.go:31] will retry after 1.420775217s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:46.077446 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:46.077526 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:46.077798 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:46.215234 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:46.279745 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:46.279823 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:46.279850 1614600 retry.go:31] will retry after 1.336322791s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:46.577242 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:46.577341 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:46.577688 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:47.077361 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:47.077438 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:47.077723 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:47.416255 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:47.477013 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:47.480365 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:47.480397 1614600 retry.go:31] will retry after 2.174557655s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:47.576489 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:47.576616 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:47.576955 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:35:47.577044 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:35:47.617100 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:47.681529 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:47.681577 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:47.681598 1614600 retry.go:31] will retry after 3.276200411s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
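
The "will retry after" intervals for each manifest grow roughly exponentially with jitter (for storage-provisioner: 355ms, 537ms, 707ms, 917ms, 1.3s, 3.3s, and continuing to grow below), the classic backoff pattern behind the retry.go:31 lines. A self-contained Go sketch in that spirit; the constants and the half-plus-random jitter rule are illustrative, not minikube's actual retry.go parameters.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff retries op with jittered, exponentially growing waits.
    func retryWithBackoff(attempts int, initial time.Duration, op func() error) error {
        wait := initial
        for i := 0; i < attempts; i++ {
            err := op()
            if err == nil {
                return nil
            }
            // Jitter spreads retries out so the two appliers above do not
            // hammer the apiserver in lockstep.
            jittered := wait/2 + time.Duration(rand.Int63n(int64(wait)))
            fmt.Printf("will retry after %v: %v\n", jittered, err)
            time.Sleep(jittered)
            wait *= 2
        }
        return errors.New("all attempts failed")
    }

    func main() {
        _ = retryWithBackoff(5, 400*time.Millisecond, func() error {
            return errors.New("connect: connection refused")
        })
    }
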
	I1209 04:35:48.077115 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:48.077203 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:48.077555 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:48.577382 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:48.577481 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:48.577821 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:49.076458 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:49.076528 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:49.076798 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:49.576545 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:49.576626 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:49.576988 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:49.655381 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:49.715000 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:49.715035 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:49.715054 1614600 retry.go:31] will retry after 3.337758974s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:50.077421 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:50.077518 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:50.077847 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:35:50.077903 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:35:50.576531 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:50.576630 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:50.576967 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:50.958720 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:51.022646 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:51.022681 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:51.022700 1614600 retry.go:31] will retry after 4.624703928s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:51.077048 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:51.077142 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:51.077474 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:51.577259 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:51.577334 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:51.577661 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:52.076578 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:52.076656 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:52.076943 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:52.576488 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:52.576565 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:52.576896 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:35:52.576958 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:35:53.053753 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:53.077246 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:53.077324 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:53.077594 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:53.113242 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:53.113284 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:53.113306 1614600 retry.go:31] will retry after 2.734988542s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:53.576425 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:53.576526 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:53.576833 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:54.076533 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:54.076634 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:54.076949 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:54.576551 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:54.576653 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:54.577004 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:35:54.577071 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:35:55.076426 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:55.076500 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:55.076811 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:55.576518 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:55.576596 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:55.576936 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:55.648391 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:55.705094 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:55.708789 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:55.708820 1614600 retry.go:31] will retry after 6.736330921s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:55.849034 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:55.918734 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:55.918780 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:55.918800 1614600 retry.go:31] will retry after 8.152075725s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:56.077153 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:56.077246 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:56.077636 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:56.577352 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:56.577427 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:56.577693 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:35:56.577743 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:35:57.077398 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:57.077499 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:57.077829 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:57.576552 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:57.576635 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:57.576959 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:58.076583 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:58.076666 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:58.076931 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:58.576498 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:58.576587 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:58.576893 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:59.076592 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:59.076667 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:59.077034 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:35:59.077089 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:35:59.576459 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:59.576533 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:59.576805 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:00.076586 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:00.076681 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:00.077014 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:00.576522 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:00.576616 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:00.577002 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:01.076587 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:01.076666 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:01.076947 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:01.576525 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:01.576599 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:01.576933 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:01.576991 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:02.077159 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:02.077237 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:02.077605 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:02.446164 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:36:02.502744 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:36:02.506462 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:02.506498 1614600 retry.go:31] will retry after 8.388840508s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:02.576683 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:02.576758 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:02.577095 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:03.076524 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:03.076604 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:03.076977 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:03.576704 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:03.576784 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:03.577119 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:03.577179 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:04.071900 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:36:04.076533 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:04.076606 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:04.076869 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:04.150537 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:36:04.154620 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:04.154650 1614600 retry.go:31] will retry after 8.078270125s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:04.577310 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:04.577452 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:04.577816 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:05.076556 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:05.076634 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:05.077025 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:05.576594 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:05.576672 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:05.576950 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:06.076647 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:06.076738 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:06.077077 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:06.077129 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:06.576522 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:06.576621 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:06.576938 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:07.076823 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:07.076900 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:07.077209 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:07.577024 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:07.577097 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:07.577441 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:08.077262 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:08.077341 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:08.077670 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:08.077723 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:08.577265 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:08.577344 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:08.577616 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:09.077403 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:09.077482 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:09.077835 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:09.576413 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:09.576503 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:09.576813 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:10.076504 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:10.076593 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:10.076887 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:10.576575 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:10.576673 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:10.576991 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:10.577053 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:10.895548 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:36:10.953462 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:36:10.957148 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:10.957180 1614600 retry.go:31] will retry after 18.757746695s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
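
The apply is retried with randomized delays (18.76s here; 20.02s, 13.47s, and 20.57s further down), which is consistent with jittered backoff so concurrent addon appliers don't retry in lockstep. A small sketch of that pattern under that assumption; retryWithJitter is a hypothetical name, not minikube's retry.go API:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithJitter runs op up to attempts times, sleeping a randomized
// duration between tries so repeated failures don't synchronize.
func retryWithJitter(attempts int, base time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		d := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	_ = retryWithJitter(3, 10*time.Second, func() error {
		return fmt.Errorf("connect: connection refused")
	})
}
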
	I1209 04:36:11.076395 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:11.076478 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:11.076772 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:11.576443 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:11.576513 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:11.576815 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:12.076936 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:12.077013 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:12.077309 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:12.233682 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:36:12.292817 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:36:12.296392 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:12.296423 1614600 retry.go:31] will retry after 20.023788924s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
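
Both manifests fail identically because kubectl's client-side validation first downloads the OpenAPI schema from the apiserver; while nothing is listening on 8441 the schema fetch can never succeed, and the suggested --validate=false would only move the failure to the subsequent server request. A hedged sketch of the apply wrapper being retried (assumes kubectl on PATH; the helper name is illustrative, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

// apply shells out the same way the log's ssh_runner lines do. While
// the apiserver is down this always fails during validation, before
// any objects are sent to the cluster.
func apply(manifest string) error {
	out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
	}
	return nil
}

func main() {
	if err := apply("/etc/kubernetes/addons/storageclass.yaml"); err != nil {
		fmt.Println(err)
	}
}
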
	I1209 04:36:12.576943 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:12.577019 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:12.577364 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:12.577421 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:13.077108 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:13.077239 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:13.077603 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:13.577256 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:13.577343 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:13.577689 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:14.077313 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:14.077412 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:14.077731 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:14.576427 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:14.576496 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:14.576774 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:15.076490 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:15.076583 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:15.076938 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:15.076994 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:15.576474 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:15.576555 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:15.576853 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:16.076431 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:16.076506 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:16.076783 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:16.576527 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:16.576609 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:16.576956 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:17.076988 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:17.077082 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:17.077457 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:17.077514 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:17.577068 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:17.577144 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:17.577409 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:18.077285 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:18.077383 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:18.077755 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:18.576466 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:18.576544 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:18.576909 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:19.076597 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:19.076666 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:19.076929 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:19.576602 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:19.576675 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:19.577011 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:19.577070 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:20.076579 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:20.076658 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:20.076980 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:20.576450 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:20.576531 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:20.576849 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:21.076506 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:21.076594 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:21.076946 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:21.576536 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:21.576638 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:21.576994 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:22.077314 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:22.077388 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:22.077670 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:22.077714 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:22.576513 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:22.576607 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:22.576958 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:23.076502 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:23.076595 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:23.076934 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:23.576637 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:23.576705 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:23.577060 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:24.076759 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:24.076839 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:24.077254 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:24.576837 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:24.576916 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:24.577306 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:24.577364 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:25.077118 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:25.077190 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:25.077463 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:25.577272 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:25.577348 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:25.577737 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:26.077403 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:26.077487 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:26.077842 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:26.576440 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:26.576511 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:26.576779 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:27.076863 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:27.076944 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:27.077310 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:27.077367 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:27.577163 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:27.577241 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:27.577580 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:28.077311 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:28.077379 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:28.077629 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:28.577399 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:28.577473 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:28.577808 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:29.076424 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:29.076514 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:29.076878 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:29.576577 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:29.576646 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:29.576910 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:29.576955 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:29.715418 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:36:29.773517 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:36:29.777518 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:29.777549 1614600 retry.go:31] will retry after 13.466249075s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:30.077059 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:30.077150 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:30.077512 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:30.577014 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:30.577100 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:30.577433 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:31.077181 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:31.077268 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:31.077521 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:31.577348 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:31.577443 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:31.577801 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:31.577857 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:32.076722 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:32.076806 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:32.077154 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:32.320502 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:36:32.377593 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:36:32.381870 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:32.381909 1614600 retry.go:31] will retry after 28.435049856s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:32.577214 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:32.577283 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:32.577547 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:33.077429 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:33.077516 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:33.077823 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:33.576506 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:33.576632 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:33.576978 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:34.076485 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:34.076586 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:34.076922 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:34.076973 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:34.576560 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:34.576639 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:34.576951 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:35.076511 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:35.076628 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:35.076979 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:35.576473 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:35.576575 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:35.576844 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:36.076491 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:36.076571 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:36.076926 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:36.576535 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:36.576620 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:36.576977 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:36.577035 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:37.076803 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:37.076875 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:37.077215 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:37.577050 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:37.577125 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:37.577459 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:38.077398 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:38.077495 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:38.077876 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:38.576584 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:38.576668 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:38.576989 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:39.076692 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:39.076768 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:39.077121 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:39.077180 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:39.576496 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:39.576575 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:39.576911 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:40.076578 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:40.076653 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:40.077016 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:40.576532 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:40.576612 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:40.576898 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:41.076617 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:41.076698 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:41.077052 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:41.576584 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:41.576671 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:41.576937 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:41.576987 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:42.076459 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:42.076556 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:42.076942 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:42.576531 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:42.576610 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:42.576958 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:43.076568 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:43.076663 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:43.077002 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:43.244488 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:36:43.308556 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:36:43.308599 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:43.308622 1614600 retry.go:31] will retry after 20.568808948s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:43.577020 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:43.577099 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:43.577399 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:43.577456 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:44.077183 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:44.077280 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:44.077609 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:44.577311 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:44.577390 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:44.577747 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:45.076609 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:45.076692 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:45.077821 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1209 04:36:45.576471 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:45.576555 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:45.576880 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:46.076459 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:46.076531 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:46.076837 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:46.076889 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:46.576488 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:46.576565 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:46.576859 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:47.076876 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:47.076949 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:47.077253 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:47.577001 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:47.577079 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:47.577339 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:48.077087 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:48.077173 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:48.077495 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:48.077544 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:48.577135 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:48.577218 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:48.577531 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:49.077177 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:49.077246 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:49.077507 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:49.577363 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:49.577442 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:49.577806 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:50.076499 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:50.076584 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:50.076933 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:50.576621 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:50.576693 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:50.577013 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:50.577067 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:51.076722 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:51.076799 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:51.077123 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:51.576506 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:51.576581 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:51.576933 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:52.076970 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:52.077045 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:52.077314 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:52.577191 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:52.577272 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:52.577623 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:52.577685 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:53.076390 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:53.076468 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:53.076830 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:53.577353 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:53.577471 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:53.577714 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:54.076421 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:54.076508 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:54.076889 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:54.576481 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:54.576586 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:54.576925 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:55.076607 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:55.076685 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:55.077020 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:55.077081 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:55.576488 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:55.576567 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:55.576912 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... type.go/round_trippers.go polling loop: GET https://192.168.49.2:8441/api/v1/nodes/functional-331811 repeated every ~500ms from 04:36:56.076 through 04:37:00.577, each with an empty response; node_ready.go:55 logged "dial tcp 192.168.49.2:8441: connect: connection refused" warnings at 04:36:57.077 and 04:36:59.577 ...]
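	For orientation, the node_ready.go loop summarized above is minikube repeatedly asking the apiserver for the node's Ready condition until it answers. Below is a minimal, hypothetical sketch of that kind of check with client-go; the kubeconfig path, node name, and 500ms cadence are taken from this log, but the code is illustrative and is not minikube's actual node_ready.go.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the named node currently has condition Ready=True.
	func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			// While the apiserver is down this is "connect: connection refused".
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		// Kubeconfig path as used by the commands in this log.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			ok, err := nodeReady(cs, "functional-331811")
			if err != nil {
				fmt.Println("will retry:", err)
			} else if ok {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(500 * time.Millisecond) // same cadence as the polls above
		}
	}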
	I1209 04:37:00.817971 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:37:00.880147 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:37:00.880206 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:37:00.880224 1614600 retry.go:31] will retry after 16.46927575s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
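	The "will retry after 16.46927575s" line is minikube's retry helper backing off between apply attempts. A rough, self-contained sketch of that retry-with-backoff pattern follows; the doubling-plus-jitter policy and attempt cap are assumptions, not minikube's exact retry.go behavior.

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// applyWithRetry runs `kubectl apply` on a manifest, doubling a jittered
	// delay after each failure, up to maxAttempts. Purely illustrative.
	func applyWithRetry(manifest string, maxAttempts int) error {
		delay := 1 * time.Second
		var err error
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			out, e := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
			if e == nil {
				return nil
			}
			err = fmt.Errorf("attempt %d: %v: %s", attempt, e, out)
			jitter := time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
			time.Sleep(delay + jitter)
			delay *= 2
		}
		return err
	}

	func main() {
		if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
			fmt.Println("giving up:", err)
		}
	}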
	[... polling continued every ~500ms from 04:37:01.076 through 04:37:03.576 with empty responses; node_ready.go:55 warned "connect: connection refused" at 04:37:02.077 ...]
	I1209 04:37:03.878560 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:37:03.937026 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:37:03.940694 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:37:03.940802 1614600 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
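	Both apply failures above share one root cause: nothing is listening on port 8441, so kubectl cannot download the OpenAPI schema it validates against. One way to confirm that independently of kubectl is a raw TCP dial against the two endpoints the log mentions (the 2-second timeout is arbitrary):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Endpoints the log shows failing: the node IP and the localhost alias.
		for _, addr := range []string{"192.168.49.2:8441", "localhost:8441"} {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err != nil {
				fmt.Printf("%s: %v\n", addr, err) // expect "connect: connection refused" here
				continue
			}
			conn.Close()
			fmt.Printf("%s: reachable\n", addr)
		}
	}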
	[... polling continued every ~500ms from 04:37:04.077 through 04:37:17.077 with empty responses; node_ready.go:55 repeated the "connect: connection refused" warning at 04:37:04, 04:37:06, 04:37:08, 04:37:11, 04:37:13 and 04:37:15 ...]
	I1209 04:37:17.349771 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:37:17.409388 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:37:17.413192 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:37:17.413302 1614600 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1209 04:37:17.416242 1614600 out.go:179] * Enabled addons: 
	I1209 04:37:17.419770 1614600 addons.go:530] duration metric: took 1m35.33224358s for enable addons: enabled=[]
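	With every callback failing, minikube gives up and reports enabled=[]. Note that the kubectl stderr itself names an escape hatch: --validate=false skips the OpenAPI fetch entirely, so the manifest would apply as soon as the apiserver accepts connections, at the cost of client-side schema checking. A simplified sketch of that fallback, dropping the sudo and full binary path used in the log:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Same manifest as in the log; --validate=false is the workaround the
		// kubectl error message suggests when the OpenAPI download fails.
		cmd := exec.Command("kubectl", "apply", "--force", "--validate=false",
			"-f", "/etc/kubernetes/addons/storageclass.yaml")
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Fprintln(os.Stderr, "apply failed:", err)
		}
	}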
	[... polling continued every ~500ms from 04:37:17.576 through 04:37:52.577 with empty responses; node_ready.go:55 kept warning "connect: connection refused" roughly every two seconds, from 04:37:18 through 04:37:52 ...]
	I1209 04:37:53.077401 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:53.077481 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:53.077828 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:53.576483 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:53.576594 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:53.576849 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:54.076507 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:54.076588 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:54.076956 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:54.576530 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:54.576600 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:54.576860 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:55.076519 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:55.076586 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:55.076862 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:55.076908 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:55.576486 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:55.576622 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:55.576971 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:56.076686 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:56.076765 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:56.077127 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:56.576603 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:56.576676 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:56.576958 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:57.077077 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:57.077153 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:57.077489 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:57.077549 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:57.577277 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:57.577362 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:57.577693 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:58.076355 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:58.076431 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:58.076691 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:58.576442 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:58.576527 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:58.576837 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:59.076522 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:59.076605 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:59.076928 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:59.576429 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:59.576509 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:59.576828 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:59.576883 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:00.076590 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:00.076684 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:00.076994 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:00.576859 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:00.576953 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:00.577331 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:01.077097 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:01.077171 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:01.077483 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:01.577282 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:01.577361 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:01.577744 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:01.577806 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:02.076658 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:02.076737 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:02.077088 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:02.576471 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:02.576546 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:02.576881 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:03.076526 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:03.076607 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:03.076969 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:03.576667 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:03.576744 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:03.577088 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:04.076785 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:04.076860 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:04.077186 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:04.077249 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:04.576475 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:04.576552 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:04.576888 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:05.076606 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:05.076685 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:05.077018 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:05.576442 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:05.576519 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:05.576866 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:06.076554 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:06.076639 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:06.076961 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:06.576505 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:06.576581 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:06.576925 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:06.576985 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:07.076745 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:07.076824 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:07.077084 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:07.576464 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:07.576543 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:07.576890 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:08.076484 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:08.076571 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:08.076916 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:08.576613 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:08.576683 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:08.576948 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:09.076506 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:09.076590 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:09.076947 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:09.077009 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:09.576680 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:09.576755 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:09.577084 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:10.076460 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:10.076530 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:10.076842 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:10.576484 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:10.576560 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:10.576899 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:11.076596 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:11.076680 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:11.077014 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:11.077067 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:11.576395 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:11.576474 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:11.576732 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:12.076887 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:12.076960 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:12.077284 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:12.577054 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:12.577140 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:12.577479 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:13.077220 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:13.077295 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:13.077565 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:13.077607 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:13.577421 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:13.577504 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:13.577802 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:14.076532 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:14.076618 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:14.076974 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:14.576647 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:14.576716 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:14.577024 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:15.076742 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:15.076823 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:15.077205 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:15.577016 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:15.577093 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:15.577458 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:15.577510 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:16.076942 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:16.077018 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:16.077298 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:16.577080 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:16.577154 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:16.577499 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:17.077222 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:17.077307 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:17.077621 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:17.577360 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:17.577430 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:17.577689 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:17.577730 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:18.076508 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:18.076588 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:18.076948 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:18.576659 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:18.576737 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:18.577070 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:19.076445 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:19.076523 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:19.076847 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:19.576473 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:19.576553 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:19.576909 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:20.076508 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:20.076586 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:20.076942 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:20.077015 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:20.577408 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:20.577485 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:20.577743 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:21.076439 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:21.076529 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:21.076872 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:21.576583 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:21.576671 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:21.577011 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:22.077043 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:22.077118 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:22.077384 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:22.077433 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:22.577298 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:22.577383 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:22.577762 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:23.076477 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:23.076559 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:23.076896 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:23.576459 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:23.576531 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:23.576821 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:24.076595 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:24.076670 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:24.077017 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:24.576721 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:24.576822 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:24.577172 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:24.577228 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:25.076985 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:25.077057 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:25.077316 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:25.577081 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:25.577159 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:25.577525 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:26.077428 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:26.077536 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:26.077886 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:26.576422 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:26.576498 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:26.576744 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:27.076724 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:27.076800 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:27.077105 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:27.077166 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:27.576841 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:27.576921 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:27.577195 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:28.076523 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:28.076598 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:28.076903 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:28.576540 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:28.576626 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:28.576965 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:29.076687 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:29.076761 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:29.077094 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:29.576545 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:29.576621 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:29.576907 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:29.576958 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:30.076524 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:30.076608 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:30.076902 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:30.576497 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:30.576577 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:30.576896 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:31.076559 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:31.076633 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:31.076951 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:31.576483 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:31.576579 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:31.576903 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:32.077036 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:32.077110 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:32.077432 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:32.077494 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:32.577246 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:32.577331 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:32.577699 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:33.076404 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:33.076504 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:33.076853 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:33.576444 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:33.576560 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:33.577018 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:34.076479 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:34.076552 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:34.076840 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:34.576496 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:34.576575 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:34.576892 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:34.576950 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:35.076629 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:35.076710 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:35.077057 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:35.576738 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:35.576823 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:35.577124 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:36.076862 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:36.076938 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:36.077291 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:36.577100 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:36.577187 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:36.577528 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:36.577591 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:37.076430 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:37.076511 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:37.076779 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:37.576499 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:37.576590 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:37.576922 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:38.076501 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:38.076577 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:38.076985 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:38.576522 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:38.576605 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:38.576896 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:39.076497 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:39.076571 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:39.076900 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:39.076954 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:39.576601 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:39.576675 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:39.576993 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:40.076482 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:40.076567 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:40.076858 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:40.576479 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:40.576556 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:40.576936 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:41.076484 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:41.076560 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:41.076880 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:41.576426 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:41.576504 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:41.576818 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:41.576870 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:42.077124 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:42.077219 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:42.077565 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... 123 further polling cycles (04:38:42.577 to 04:39:43.577) elided: the same GET https://192.168.49.2:8441/api/v1/nodes/functional-331811 with identical Accept and User-Agent headers, issued every ~500ms and logged by round_trippers with status="" headers="" milliseconds=0 (no response received), while node_ready.go:55 repeated the same "connection refused" warning 27 more times, roughly every 2-2.5s ...]
	I1209 04:39:44.076789 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:44.076865 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:44.077206 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:44.576988 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:44.577058 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:44.577402 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:45.077482 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:45.077593 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:45.078175 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:45.577050 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:45.577162 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:45.577633 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:45.577692 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:46.077289 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:46.077367 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:46.077631 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:46.577385 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:46.577458 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:46.577783 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:47.076819 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:47.076895 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:47.077306 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:47.577090 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:47.577164 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:47.577430 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:48.077209 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:48.077287 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:48.077634 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:48.077694 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:48.576414 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:48.576492 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:48.576820 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:49.076429 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:49.076509 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:49.076812 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:49.576499 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:49.576573 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:49.576922 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:50.076634 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:50.076716 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:50.077027 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:50.576449 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:50.576535 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:50.576852 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:50.576904 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:51.076500 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:51.076582 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:51.076954 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:51.576645 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:51.576720 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:51.577036 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:52.077317 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:52.077391 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:52.077666 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:52.576377 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:52.576457 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:52.576786 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:53.076496 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:53.076575 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:53.076936 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:53.076994 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:53.576477 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:53.576556 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:53.576834 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:54.076502 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:54.076579 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:54.076906 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:54.576494 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:54.576578 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:54.576894 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:55.076446 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:55.076520 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:55.076829 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:55.576458 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:55.576544 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:55.576864 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:55.576922 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:56.076627 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:56.076713 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:56.077075 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:56.576606 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:56.576684 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:56.576957 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:57.076915 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:57.076989 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:57.077329 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:57.577142 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:57.577223 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:57.577545 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:57.577606 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:58.077310 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:58.077382 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:58.077644 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:58.576398 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:58.576474 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:58.576810 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:59.076495 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:59.076569 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:59.076901 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:59.576452 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:59.576522 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:59.576814 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:00.076619 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:00.076707 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:00.077051 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:00.077102 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:00.576782 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:00.576892 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:00.577341 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:01.077110 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:01.077188 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:01.077469 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:01.577344 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:01.577442 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:01.577802 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:02.077046 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:02.077122 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:02.077464 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:02.077524 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:02.577200 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:02.577280 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:02.577554 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:03.077335 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:03.077410 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:03.077751 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:03.576497 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:03.576579 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:03.576927 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:04.076619 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:04.076693 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:04.076986 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:04.576717 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:04.576802 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:04.577167 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:04.577233 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:05.077000 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:05.077083 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:05.077407 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:05.577162 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:05.577240 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:05.577561 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:06.077371 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:06.077455 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:06.077846 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:06.576606 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:06.576686 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:06.577045 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:07.076867 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:07.076956 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:07.077237 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:07.077285 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:07.577031 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:07.577112 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:07.577448 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:08.077143 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:08.077231 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:08.077595 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:08.577327 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:08.577403 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:08.577658 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:09.076424 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:09.076510 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:09.076843 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:09.576572 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:09.576654 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:09.577008 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:09.577065 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:10.076510 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:10.076592 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:10.076913 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:10.576495 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:10.576569 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:10.576912 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:11.076619 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:11.076698 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:11.077076 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:11.576765 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:11.576835 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:11.577096 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:11.577137 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:12.077236 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:12.077311 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:12.077690 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:12.576433 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:12.576521 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:12.576860 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:13.076474 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:13.076548 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:13.076826 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:13.576501 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:13.576589 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:13.576934 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:14.076645 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:14.076722 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:14.077046 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:14.077105 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:14.576455 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:14.576537 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:14.576860 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:15.076501 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:15.076587 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:15.076968 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:15.576689 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:15.576770 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:15.577097 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:16.076449 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:16.076527 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:16.076791 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:16.576477 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:16.576559 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:16.576904 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:16.576962 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:17.076730 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:17.076809 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:17.077145 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:17.576557 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:17.576637 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:17.576969 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:18.076487 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:18.076564 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:18.076935 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:18.576468 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:18.576582 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:18.576907 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:19.076426 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:19.076498 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:19.076819 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:19.076870 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:19.576490 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:19.576567 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:19.576904 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:20.076514 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:20.076611 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:20.076996 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:20.576452 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:20.576533 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:20.576869 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:21.076479 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:21.076558 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:21.076898 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:21.076954 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:21.576671 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:21.576745 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:21.577092 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:22.077101 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:22.077188 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:22.077458 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:22.577307 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:22.577395 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:22.577780 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:23.076488 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:23.076566 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:23.076905 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:23.576594 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:23.576667 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:23.576979 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:23.577044 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:24.076716 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:24.076812 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:24.077201 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:24.577016 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:24.577098 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:24.577427 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:25.077197 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:25.077272 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:25.077553 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:25.577396 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:25.577471 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:25.577807 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:25.577866 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:26.076551 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:26.076646 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:26.077007 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:26.576462 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:26.576534 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:26.576839 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:27.076813 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:27.076897 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:27.077258 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:27.577061 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:27.577148 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:27.577479 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:28.077203 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:28.077282 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:28.077580 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:28.077625 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:28.576412 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:28.576489 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:28.576847 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:29.076502 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:29.076581 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:29.076943 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:29.576637 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:29.576712 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:29.576969 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:30.076527 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:30.076611 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:30.077034 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:30.576765 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:30.576846 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:30.577180 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:30.577234 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:31.076904 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:31.076979 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:31.077238 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:31.577016 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:31.577093 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:31.577496 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:32.077307 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:32.077384 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:32.077722 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:32.576465 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:32.576539 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:32.576829 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:33.076490 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:33.076563 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:33.076911 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:33.076973 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:33.576529 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:33.576607 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:33.576968 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:34.076674 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:34.076761 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:34.077041 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:34.576509 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:34.576590 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:34.576964 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:35.076695 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:35.076799 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:35.077151 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:35.077212 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-331811 poll repeats every ~500ms with identical Accept/User-Agent headers from 04:40:35.576 through 04:41:35.077; every attempt fails with "dial tcp 192.168.49.2:8441: connect: connection refused" (empty response, 0ms), and node_ready.go:55 logs the identical retry warning roughly every 2.5s ...]
	I1209 04:41:35.576501 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:35.576570 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:35.576829 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:36.076518 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:36.076598 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:36.076971 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:36.576553 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:36.576637 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:36.577032 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:37.076948 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:37.077019 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:37.077352 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:37.077398 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:37.577132 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:37.577216 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:37.577592 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:38.077367 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:38.077444 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:38.077774 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:38.576480 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:38.576549 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:38.576826 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:39.076517 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:39.076596 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:39.077020 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:39.576754 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:39.576834 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:39.577168 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:39.577222 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:40.076627 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:40.076703 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:40.076991 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:40.576486 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:40.576560 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:40.576891 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:41.076611 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:41.076693 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:41.077032 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:41.577374 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:41.577443 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:41.577738 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:41.577796 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:42.076410 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:42.076517 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:42.076959 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:42.576665 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:42.576744 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:42.577069 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:43.076660 1614600 node_ready.go:38] duration metric: took 6m0.000391304s for node "functional-331811" to be "Ready" ...
	I1209 04:41:43.080060 1614600 out.go:203] 
	W1209 04:41:43.083006 1614600 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1209 04:41:43.083030 1614600 out.go:285] * 
	W1209 04:41:43.085173 1614600 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 04:41:43.088614 1614600 out.go:203] 
	
	
	==> CRI-O <==
	Dec 09 04:35:39 functional-331811 crio[5392]: time="2025-12-09T04:35:39.991577379Z" level=info msg="Using the internal default seccomp profile"
	Dec 09 04:35:39 functional-331811 crio[5392]: time="2025-12-09T04:35:39.991590368Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 09 04:35:39 functional-331811 crio[5392]: time="2025-12-09T04:35:39.991596276Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 09 04:35:39 functional-331811 crio[5392]: time="2025-12-09T04:35:39.991608206Z" level=info msg="RDT not available in the host system"
	Dec 09 04:35:39 functional-331811 crio[5392]: time="2025-12-09T04:35:39.991629581Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 09 04:35:39 functional-331811 crio[5392]: time="2025-12-09T04:35:39.992592059Z" level=info msg="Conmon does support the --sync option"
	Dec 09 04:35:39 functional-331811 crio[5392]: time="2025-12-09T04:35:39.992622862Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 09 04:35:39 functional-331811 crio[5392]: time="2025-12-09T04:35:39.992647149Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 09 04:35:39 functional-331811 crio[5392]: time="2025-12-09T04:35:39.993492047Z" level=info msg="Conmon does support the --sync option"
	Dec 09 04:35:39 functional-331811 crio[5392]: time="2025-12-09T04:35:39.993519452Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 09 04:35:39 functional-331811 crio[5392]: time="2025-12-09T04:35:39.993729366Z" level=info msg="Updated default CNI network name to "
	Dec 09 04:35:39 functional-331811 crio[5392]: time="2025-12-09T04:35:39.994436293Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oc
i/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_
memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_d
ir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [c
rio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 09 04:35:39 functional-331811 crio[5392]: time="2025-12-09T04:35:39.995012996Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 09 04:35:39 functional-331811 crio[5392]: time="2025-12-09T04:35:39.995082403Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 09 04:35:40 functional-331811 crio[5392]: time="2025-12-09T04:35:40.059745321Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 09 04:35:40 functional-331811 crio[5392]: time="2025-12-09T04:35:40.059782253Z" level=info msg="Starting seccomp notifier watcher"
	Dec 09 04:35:40 functional-331811 crio[5392]: time="2025-12-09T04:35:40.059831984Z" level=info msg="Create NRI interface"
	Dec 09 04:35:40 functional-331811 crio[5392]: time="2025-12-09T04:35:40.059948186Z" level=info msg="built-in NRI default validator is disabled"
	Dec 09 04:35:40 functional-331811 crio[5392]: time="2025-12-09T04:35:40.059957836Z" level=info msg="runtime interface created"
	Dec 09 04:35:40 functional-331811 crio[5392]: time="2025-12-09T04:35:40.059972769Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 09 04:35:40 functional-331811 crio[5392]: time="2025-12-09T04:35:40.05997962Z" level=info msg="runtime interface starting up..."
	Dec 09 04:35:40 functional-331811 crio[5392]: time="2025-12-09T04:35:40.05998607Z" level=info msg="starting plugins..."
	Dec 09 04:35:40 functional-331811 crio[5392]: time="2025-12-09T04:35:40.059999329Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 09 04:35:40 functional-331811 crio[5392]: time="2025-12-09T04:35:40.060074998Z" level=info msg="No systemd watchdog enabled"
	Dec 09 04:35:40 functional-331811 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:41:45.285664    8649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:41:45.286362    8649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:41:45.288117    8649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:41:45.288763    8649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:41:45.290378    8649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 9 02:15] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 03:35] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 04:15] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 04:17] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:23] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:24] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 04:41:45 up  9:24,  0 user,  load average: 0.18, 0.29, 0.75
	Linux functional-331811 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 09 04:41:43 functional-331811 kubelet[8538]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:41:43 functional-331811 kubelet[8538]: E1209 04:41:43.188017    8538 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:41:43 functional-331811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:41:43 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:41:43 functional-331811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1138.
	Dec 09 04:41:43 functional-331811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:41:43 functional-331811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:41:43 functional-331811 kubelet[8544]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:41:43 functional-331811 kubelet[8544]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:41:43 functional-331811 kubelet[8544]: E1209 04:41:43.915465    8544 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:41:43 functional-331811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:41:43 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:41:44 functional-331811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1139.
	Dec 09 04:41:44 functional-331811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:41:44 functional-331811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:41:44 functional-331811 kubelet[8565]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:41:44 functional-331811 kubelet[8565]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:41:44 functional-331811 kubelet[8565]: E1209 04:41:44.651551    8565 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:41:44 functional-331811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:41:44 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:41:45 functional-331811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1140.
	Dec 09 04:41:45 functional-331811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:41:45 functional-331811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:41:45 functional-331811 kubelet[8657]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:41:45 functional-331811 kubelet[8657]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-331811 -n functional-331811
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-331811 -n functional-331811: exit status 2 (363.734178ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-331811" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (369.46s)
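The failure chain for this test is visible in the captured logs: the kubelet exits at startup with "kubelet is configured to not run on a host using cgroup v1" (systemd's restart counter is at 1138 and climbing), so the apiserver on 192.168.49.2:8441 never comes up, every node-ready poll ends in "connection refused", and minikube gives up after the 6m0s wait. As a minimal illustrative sketch (not part of minikube or this test suite; cgroupcheck.go is a hypothetical standalone file), the cgroup-version distinction the kubelet enforces can be read from the filesystem type of /sys/fs/cgroup:

	// cgroupcheck.go - illustrative sketch only; not part of this test run.
	// Linux-only: reports whether the host uses the cgroup v2 unified
	// hierarchy or the legacy v1 layout that the kubelet above rejects.
	package main

	import (
		"fmt"
		"syscall"
	)

	// CGROUP2_SUPER_MAGIC from <linux/magic.h>: a cgroup v2 mount at
	// /sys/fs/cgroup reports this filesystem type.
	const cgroup2SuperMagic = 0x63677270

	func main() {
		var st syscall.Statfs_t
		if err := syscall.Statfs("/sys/fs/cgroup", &st); err != nil {
			fmt.Println("statfs /sys/fs/cgroup failed:", err)
			return
		}
		if st.Type == cgroup2SuperMagic {
			fmt.Println("cgroup v2 (unified hierarchy)")
		} else {
			// A non-cgroup2 filesystem (typically tmpfs) here means the
			// legacy v1 layout, matching the kubelet validation error
			// captured above.
			fmt.Println("cgroup v1 (legacy hierarchy)")
		}
	}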

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.57s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-331811 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-331811 get po -A: exit status 1 (63.082198ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-331811 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-331811 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-331811 get po -A"
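Both expectations fail for the same underlying reason: nothing is listening on the apiserver endpoint, so kubectl gets "connection refused" before it can list any pods. A minimal sketch of the same reachability check (probe.go is a hypothetical standalone probe, not part of functional_test.go):

	// probe.go - illustrative sketch; not part of the minikube test suite.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// 192.168.49.2:8441 is the apiserver endpoint from the logs above.
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
		if err != nil {
			// With the apiserver down this prints the same
			// "connect: connection refused" seen in the test output.
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}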
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-331811
helpers_test.go:243: (dbg) docker inspect functional-331811:

-- stdout --
	[
	    {
	        "Id": "51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87",
	        "Created": "2025-12-09T04:27:19.770188806Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1609115,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T04:27:19.828715728Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/hostname",
	        "HostsPath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/hosts",
	        "LogPath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87-json.log",
	        "Name": "/functional-331811",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-331811:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-331811",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87",
	                "LowerDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685-init/diff:/var/lib/docker/overlay2/cb3f2b8eaaa8875b2899fccd39c4eec1759909855a0b804bc10246bdeabb16ed/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-331811",
	                "Source": "/var/lib/docker/volumes/functional-331811/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-331811",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-331811",
	                "name.minikube.sigs.k8s.io": "functional-331811",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5c0753338127320f08906f0ae98414e1971b55970cf028db179c2214fd2722cb",
	            "SandboxKey": "/var/run/docker/netns/5c0753338127",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34255"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34256"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34259"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34257"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34258"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-331811": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:27:66:bb:a1:d6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8c16962547dedb5d6155d1546bcc27e347ab5261f9ad46fc3b09cc8fb9cc112f",
	                    "EndpointID": "1a5d6a22e9497009b4121ea56dc4839e2ff8827d92252c0464236c5f49c11216",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-331811",
	                        "51da5dad63e9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-331811 -n functional-331811
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-331811 -n functional-331811: exit status 2 (310.03898ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-331811 logs -n 25: (1.065778499s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-790468 ssh sudo cat /usr/share/ca-certificates/1580521.pem                                                                                     │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ ssh            │ functional-790468 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image          │ functional-790468 image ls                                                                                                                                │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ ssh            │ functional-790468 ssh sudo cat /etc/ssl/certs/15805212.pem                                                                                                │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image          │ functional-790468 image save kicbase/echo-server:functional-790468 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ ssh            │ functional-790468 ssh sudo cat /usr/share/ca-certificates/15805212.pem                                                                                    │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ ssh            │ functional-790468 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image          │ functional-790468 image rm kicbase/echo-server:functional-790468 --alsologtostderr                                                                        │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image          │ functional-790468 image ls                                                                                                                                │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image          │ functional-790468 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image          │ functional-790468 image ls                                                                                                                                │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ update-context │ functional-790468 update-context --alsologtostderr -v=2                                                                                                   │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ update-context │ functional-790468 update-context --alsologtostderr -v=2                                                                                                   │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image          │ functional-790468 image save --daemon kicbase/echo-server:functional-790468 --alsologtostderr                                                             │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ update-context │ functional-790468 update-context --alsologtostderr -v=2                                                                                                   │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image          │ functional-790468 image ls --format yaml --alsologtostderr                                                                                                │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image          │ functional-790468 image ls --format short --alsologtostderr                                                                                               │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ ssh            │ functional-790468 ssh pgrep buildkitd                                                                                                                     │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │                     │
	│ image          │ functional-790468 image ls --format json --alsologtostderr                                                                                                │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image          │ functional-790468 image ls --format table --alsologtostderr                                                                                               │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image          │ functional-790468 image build -t localhost/my-image:functional-790468 testdata/build --alsologtostderr                                                    │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image          │ functional-790468 image ls                                                                                                                                │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ delete         │ -p functional-790468                                                                                                                                      │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ start          │ -p functional-331811 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0         │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │                     │
	│ start          │ -p functional-331811 --alsologtostderr -v=8                                                                                                               │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:35 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 04:35:36
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 04:35:36.923741 1614600 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:35:36.923916 1614600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:35:36.923926 1614600 out.go:374] Setting ErrFile to fd 2...
	I1209 04:35:36.923933 1614600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:35:36.924200 1614600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 04:35:36.924580 1614600 out.go:368] Setting JSON to false
	I1209 04:35:36.925424 1614600 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":33477,"bootTime":1765221460,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1209 04:35:36.925503 1614600 start.go:143] virtualization:  
	I1209 04:35:36.929063 1614600 out.go:179] * [functional-331811] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 04:35:36.932800 1614600 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 04:35:36.932938 1614600 notify.go:221] Checking for updates...
	I1209 04:35:36.938644 1614600 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:35:36.941493 1614600 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:35:36.944366 1614600 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1577059/.minikube
	I1209 04:35:36.947167 1614600 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 04:35:36.949981 1614600 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 04:35:36.953271 1614600 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 04:35:36.953380 1614600 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:35:36.980248 1614600 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:35:36.980355 1614600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:35:37.042703 1614600 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 04:35:37.032815271 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:35:37.042820 1614600 docker.go:319] overlay module found
	I1209 04:35:37.045833 1614600 out.go:179] * Using the docker driver based on existing profile
	I1209 04:35:37.048621 1614600 start.go:309] selected driver: docker
	I1209 04:35:37.048647 1614600 start.go:927] validating driver "docker" against &{Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:35:37.048735 1614600 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 04:35:37.048847 1614600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:35:37.101945 1614600 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 04:35:37.092778249 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
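The two "docker system info --format "{{json .}}"" runs above feed resource validation. A minimal consumer of that output, decoding only fields that appear in the dump (NCPU, MemTotal, CgroupDriver); a sketch, not minikube's actual decoder:

// Sketch: decode the subset of `docker system info` JSON needed to
// validate CPU/memory and the cgroup driver, as the start path does above.
package main

import (
    "encoding/json"
    "fmt"
    "os/exec"
)

type dockerInfo struct {
    NCPU         int    `json:"NCPU"`
    MemTotal     int64  `json:"MemTotal"`
    CgroupDriver string `json:"CgroupDriver"`
}

func main() {
    out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
    if err != nil {
        panic(err)
    }
    var info dockerInfo
    if err := json.Unmarshal(out, &info); err != nil {
        panic(err)
    }
    // For the host above this prints: cpus=2 mem=7834MiB cgroup=cgroupfs
    fmt.Printf("cpus=%d mem=%dMiB cgroup=%s\n", info.NCPU, info.MemTotal/1024/1024, info.CgroupDriver)
}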
	I1209 04:35:37.102371 1614600 cni.go:84] Creating CNI manager for ""
	I1209 04:35:37.102446 1614600 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
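The kindnet recommendation is keyed on the (driver, runtime) pair: the docker driver provides no pod network of its own, and with crio there is no dockershim bridge, so a CNI must be supplied. A hypothetical sketch of that lookup; not the actual cni.go logic:

// Hypothetical driver/runtime -> CNI selection matching the line above.
package main

import "fmt"

func chooseCNI(driver, runtime string) string {
    // The docker driver with a non-docker runtime (crio here) needs a
    // pod network add-on; kindnet is the recommendation logged above.
    if driver == "docker" && runtime == "crio" {
        return "kindnet"
    }
    return "bridge"
}

func main() {
    fmt.Println(chooseCNI("docker", "crio")) // kindnet
}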
	I1209 04:35:37.102494 1614600 start.go:353] cluster config:
	{Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:35:37.105799 1614600 out.go:179] * Starting "functional-331811" primary control-plane node in "functional-331811" cluster
	I1209 04:35:37.108781 1614600 cache.go:134] Beginning downloading kic base image for docker with crio
	I1209 04:35:37.111778 1614600 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 04:35:37.114815 1614600 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1209 04:35:37.114886 1614600 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1209 04:35:37.114901 1614600 cache.go:65] Caching tarball of preloaded images
	I1209 04:35:37.114901 1614600 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 04:35:37.114988 1614600 preload.go:238] Found /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1209 04:35:37.114998 1614600 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1209 04:35:37.115114 1614600 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/config.json ...
	I1209 04:35:37.133782 1614600 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 04:35:37.133805 1614600 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 04:35:37.133825 1614600 cache.go:243] Successfully downloaded all kic artifacts
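"exists in daemon, skipping load" above only needs a presence check for the kicbase ref. One way to implement it is via the exit status of "docker image inspect", which is non-zero when the ref is absent; a sketch (digest suffix omitted for brevity):

// Sketch: a presence check equivalent to the "in local docker daemon"
// lookup logged above.
package main

import (
    "fmt"
    "os/exec"
)

func imageInDaemon(ref string) bool {
    // `docker image inspect` exits 0 only if the image exists locally.
    return exec.Command("docker", "image", "inspect", ref).Run() == nil
}

func main() {
    ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066"
    fmt.Println("in daemon:", imageInDaemon(ref))
}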
	I1209 04:35:37.133858 1614600 start.go:360] acquireMachinesLock for functional-331811: {Name:mkd467b4f3dd08f05040481144eb7b6b1e27d3ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 04:35:37.133920 1614600 start.go:364] duration metric: took 38.638µs to acquireMachinesLock for "functional-331811"
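acquireMachinesLock serializes concurrent minikube processes touching the same machine store; the log shows its parameters (500ms retry delay, 10m timeout). An illustrative stand-in using a blocking advisory file lock; the real lock may be implemented differently:

// Illustrative only: serialize access to a machine store with an
// advisory file lock, in the spirit of the acquireMachinesLock step.
package main

import (
    "fmt"
    "os"
    "syscall"
    "time"
)

func main() {
    f, err := os.OpenFile("/tmp/minikube-machines.lock", os.O_CREATE|os.O_RDWR, 0o644)
    if err != nil {
        panic(err)
    }
    defer f.Close()

    start := time.Now()
    // LOCK_EX blocks until the lock is free, standing in for the
    // delay/timeout retry loop shown in the log.
    if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
        panic(err)
    }
    defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)

    fmt.Printf("acquired lock in %s\n", time.Since(start))
}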
	I1209 04:35:37.133944 1614600 start.go:96] Skipping create...Using existing machine configuration
	I1209 04:35:37.133953 1614600 fix.go:54] fixHost starting: 
	I1209 04:35:37.134223 1614600 cli_runner.go:164] Run: docker container inspect functional-331811 --format={{.State.Status}}
	I1209 04:35:37.151389 1614600 fix.go:112] recreateIfNeeded on functional-331811: state=Running err=<nil>
	W1209 04:35:37.151428 1614600 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 04:35:37.154776 1614600 out.go:252] * Updating the running docker "functional-331811" container ...
	I1209 04:35:37.154815 1614600 machine.go:94] provisionDockerMachine start ...
	I1209 04:35:37.154907 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:37.171646 1614600 main.go:143] libmachine: Using SSH client type: native
	I1209 04:35:37.171972 1614600 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34255 <nil> <nil>}
	I1209 04:35:37.171985 1614600 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 04:35:37.327745 1614600 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-331811
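The port the SSH client dials (34255 here) is read from the container's published 22/tcp mapping via the inspect template above. A standalone equivalent, assuming a running container named functional-331811:

// Sketch: recover the host port mapped to the container's sshd, exactly
// as the inspect template in the log does.
package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "functional-331811").Output()
    if err != nil {
        panic(err)
    }
    fmt.Println("ssh port:", strings.TrimSpace(string(out))) // e.g. 34255
}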
	
	I1209 04:35:37.327810 1614600 ubuntu.go:182] provisioning hostname "functional-331811"
	I1209 04:35:37.327896 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:37.347228 1614600 main.go:143] libmachine: Using SSH client type: native
	I1209 04:35:37.347562 1614600 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34255 <nil> <nil>}
	I1209 04:35:37.347574 1614600 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-331811 && echo "functional-331811" | sudo tee /etc/hostname
	I1209 04:35:37.512164 1614600 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-331811
	
	I1209 04:35:37.512262 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:37.529769 1614600 main.go:143] libmachine: Using SSH client type: native
	I1209 04:35:37.530100 1614600 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34255 <nil> <nil>}
	I1209 04:35:37.530124 1614600 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-331811' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-331811/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-331811' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 04:35:37.682808 1614600 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 04:35:37.682838 1614600 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1577059/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1577059/.minikube}
	I1209 04:35:37.682870 1614600 ubuntu.go:190] setting up certificates
	I1209 04:35:37.682895 1614600 provision.go:84] configureAuth start
	I1209 04:35:37.682958 1614600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-331811
	I1209 04:35:37.700930 1614600 provision.go:143] copyHostCerts
	I1209 04:35:37.700976 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem
	I1209 04:35:37.701008 1614600 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem, removing ...
	I1209 04:35:37.701021 1614600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem
	I1209 04:35:37.701094 1614600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem (1078 bytes)
	I1209 04:35:37.701192 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem
	I1209 04:35:37.701215 1614600 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem, removing ...
	I1209 04:35:37.701230 1614600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem
	I1209 04:35:37.701259 1614600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem (1123 bytes)
	I1209 04:35:37.701304 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem
	I1209 04:35:37.701324 1614600 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem, removing ...
	I1209 04:35:37.701331 1614600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem
	I1209 04:35:37.701357 1614600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem (1675 bytes)
	I1209 04:35:37.701411 1614600 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem org=jenkins.functional-331811 san=[127.0.0.1 192.168.49.2 functional-331811 localhost minikube]
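The server cert above carries the listed SANs (loopback, node IP, hostname, localhost, minikube) and the 26280h lifetime from the cluster config. A sketch of producing such a cert with crypto/x509; self-signed here for brevity, whereas the real step signs with the minikube CA key:

// Sketch: issue a server certificate with the SANs listed in the log.
package main

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "math/big"
    "net"
    "os"
    "time"
)

func main() {
    key, err := rsa.GenerateKey(rand.Reader, 2048)
    if err != nil {
        panic(err)
    }
    tmpl := &x509.Certificate{
        SerialNumber: big.NewInt(1),
        Subject:      pkix.Name{Organization: []string{"jenkins.functional-331811"}},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
        KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        DNSNames:     []string{"functional-331811", "localhost", "minikube"},
        IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
    }
    // Self-signed (template used as its own parent); the real provisioner
    // passes the CA cert and ca-key.pem here instead.
    der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    if err != nil {
        panic(err)
    }
    pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}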
	I1209 04:35:37.907915 1614600 provision.go:177] copyRemoteCerts
	I1209 04:35:37.907981 1614600 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 04:35:37.908038 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:37.925118 1614600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:35:38.031668 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 04:35:38.031745 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 04:35:38.051846 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 04:35:38.051953 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 04:35:38.075178 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 04:35:38.075249 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 04:35:38.102039 1614600 provision.go:87] duration metric: took 419.115897ms to configureAuth
	I1209 04:35:38.102117 1614600 ubuntu.go:206] setting minikube options for container-runtime
	I1209 04:35:38.102384 1614600 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 04:35:38.102539 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:38.125059 1614600 main.go:143] libmachine: Using SSH client type: native
	I1209 04:35:38.125376 1614600 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34255 <nil> <nil>}
	I1209 04:35:38.125391 1614600 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 04:35:38.471803 1614600 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 04:35:38.471824 1614600 machine.go:97] duration metric: took 1.317001735s to provisionDockerMachine
	I1209 04:35:38.471836 1614600 start.go:293] postStartSetup for "functional-331811" (driver="docker")
	I1209 04:35:38.471848 1614600 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 04:35:38.471925 1614600 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 04:35:38.471961 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:38.490918 1614600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:35:38.598660 1614600 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 04:35:38.602109 1614600 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1209 04:35:38.602129 1614600 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1209 04:35:38.602133 1614600 command_runner.go:130] > VERSION_ID="12"
	I1209 04:35:38.602137 1614600 command_runner.go:130] > VERSION="12 (bookworm)"
	I1209 04:35:38.602143 1614600 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1209 04:35:38.602146 1614600 command_runner.go:130] > ID=debian
	I1209 04:35:38.602151 1614600 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1209 04:35:38.602156 1614600 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1209 04:35:38.602162 1614600 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1209 04:35:38.602263 1614600 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 04:35:38.602312 1614600 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 04:35:38.602329 1614600 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1577059/.minikube/addons for local assets ...
	I1209 04:35:38.602392 1614600 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1577059/.minikube/files for local assets ...
	I1209 04:35:38.602478 1614600 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem -> 15805212.pem in /etc/ssl/certs
	I1209 04:35:38.602488 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem -> /etc/ssl/certs/15805212.pem
	I1209 04:35:38.602561 1614600 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/test/nested/copy/1580521/hosts -> hosts in /etc/test/nested/copy/1580521
	I1209 04:35:38.602585 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/test/nested/copy/1580521/hosts -> /etc/test/nested/copy/1580521/hosts
	I1209 04:35:38.602639 1614600 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1580521
	I1209 04:35:38.610143 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem --> /etc/ssl/certs/15805212.pem (1708 bytes)
	I1209 04:35:38.627602 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/test/nested/copy/1580521/hosts --> /etc/test/nested/copy/1580521/hosts (40 bytes)
	I1209 04:35:38.644510 1614600 start.go:296] duration metric: took 172.65884ms for postStartSetup
	I1209 04:35:38.644590 1614600 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 04:35:38.644638 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:38.661666 1614600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:35:38.763521 1614600 command_runner.go:130] > 14%
	I1209 04:35:38.763600 1614600 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 04:35:38.767910 1614600 command_runner.go:130] > 169G
	I1209 04:35:38.768419 1614600 fix.go:56] duration metric: took 1.634462107s for fixHost
	I1209 04:35:38.768442 1614600 start.go:83] releasing machines lock for "functional-331811", held for 1.634508761s
	I1209 04:35:38.768510 1614600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-331811
	I1209 04:35:38.785686 1614600 ssh_runner.go:195] Run: cat /version.json
	I1209 04:35:38.785708 1614600 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 04:35:38.785735 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:38.785760 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:38.812264 1614600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:35:38.824669 1614600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:35:38.938034 1614600 command_runner.go:130] > {"iso_version": "v1.37.0-1764843329-22032", "kicbase_version": "v0.0.48-1765184860-22066", "minikube_version": "v1.37.0", "commit": "27bcd52be11288bda2f9abde063aa47b22607695"}
	I1209 04:35:38.938167 1614600 ssh_runner.go:195] Run: systemctl --version
	I1209 04:35:39.026186 1614600 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1209 04:35:39.029038 1614600 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1209 04:35:39.029075 1614600 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1209 04:35:39.029143 1614600 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 04:35:39.066886 1614600 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1209 04:35:39.071437 1614600 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1209 04:35:39.071476 1614600 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 04:35:39.071539 1614600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 04:35:39.079896 1614600 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 04:35:39.079922 1614600 start.go:496] detecting cgroup driver to use...
	I1209 04:35:39.079956 1614600 detect.go:187] detected "cgroupfs" cgroup driver on host os
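A rough heuristic for the "detected cgroupfs" line: on cgroup v2 hosts running systemd the systemd driver is the usual choice, otherwise cgroupfs. This is an assumption about the shape of the check, not detect.go's actual logic:

// Hypothetical cgroup-driver detection; the real detect.go may differ.
package main

import (
    "fmt"
    "os"
)

func cgroupDriver() string {
    // cgroup v2 exposes a unified hierarchy with a cgroup.controllers file.
    if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
        // On v2 hosts booted with systemd, the systemd driver is typical.
        if _, err := os.Stat("/run/systemd/system"); err == nil {
            return "systemd"
        }
    }
    return "cgroupfs"
}

func main() { fmt.Println(cgroupDriver()) }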
	I1209 04:35:39.080020 1614600 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 04:35:39.095690 1614600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 04:35:39.109020 1614600 docker.go:218] disabling cri-docker service (if available) ...
	I1209 04:35:39.109092 1614600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 04:35:39.124696 1614600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 04:35:39.138081 1614600 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 04:35:39.247127 1614600 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 04:35:39.364113 1614600 docker.go:234] disabling docker service ...
	I1209 04:35:39.364202 1614600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 04:35:39.381227 1614600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 04:35:39.394458 1614600 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 04:35:39.513409 1614600 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 04:35:39.656760 1614600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 04:35:39.669700 1614600 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 04:35:39.682849 1614600 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1209 04:35:39.684261 1614600 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 04:35:39.684369 1614600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:35:39.693327 1614600 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 04:35:39.693420 1614600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:35:39.702710 1614600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:35:39.711893 1614600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:35:39.720974 1614600 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 04:35:39.729134 1614600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:35:39.738010 1614600 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:35:39.746818 1614600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
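The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the cgroupfs cgroup manager, and open unprivileged low ports via default_sysctls. The same line-anchored rewrites expressed with Go's regexp over a sample fragment (input abbreviated):

// Sketch: the pause_image and cgroup_manager rewrites from the log,
// using regexp in place of sed; sample input, not the full drop-in.
package main

import (
    "fmt"
    "regexp"
)

func main() {
    conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"`

    conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
        ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
    conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
        ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

    fmt.Println(conf)
}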
	I1209 04:35:39.757592 1614600 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 04:35:39.764510 1614600 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1209 04:35:39.765518 1614600 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 04:35:39.773280 1614600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:35:39.885186 1614600 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 04:35:40.065444 1614600 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 04:35:40.065521 1614600 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 04:35:40.069680 1614600 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1209 04:35:40.069719 1614600 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1209 04:35:40.069751 1614600 command_runner.go:130] > Device: 0,72	Inode: 1638        Links: 1
	I1209 04:35:40.069764 1614600 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1209 04:35:40.069773 1614600 command_runner.go:130] > Access: 2025-12-09 04:35:39.990981436 +0000
	I1209 04:35:40.069780 1614600 command_runner.go:130] > Modify: 2025-12-09 04:35:39.990981436 +0000
	I1209 04:35:40.069788 1614600 command_runner.go:130] > Change: 2025-12-09 04:35:39.990981436 +0000
	I1209 04:35:40.069792 1614600 command_runner.go:130] >  Birth: -
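"Will wait 60s for socket path" suggests a poll-until-deadline loop over stat; a plausible shape for it (poll interval chosen arbitrarily):

// Sketch: wait for the CRI socket to appear after `systemctl restart crio`.
package main

import (
    "fmt"
    "os"
    "time"
)

func waitForSocket(path string, timeout time.Duration) error {
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        // Succeed once the path exists and is a unix socket, matching
        // the stat output logged above.
        if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
            return nil
        }
        time.Sleep(500 * time.Millisecond)
    }
    return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
    if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
        fmt.Println(err)
    }
}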
	I1209 04:35:40.069850 1614600 start.go:564] Will wait 60s for crictl version
	I1209 04:35:40.069925 1614600 ssh_runner.go:195] Run: which crictl
	I1209 04:35:40.073554 1614600 command_runner.go:130] > /usr/local/bin/crictl
	I1209 04:35:40.073791 1614600 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 04:35:40.095945 1614600 command_runner.go:130] > Version:  0.1.0
	I1209 04:35:40.096030 1614600 command_runner.go:130] > RuntimeName:  cri-o
	I1209 04:35:40.096051 1614600 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1209 04:35:40.096074 1614600 command_runner.go:130] > RuntimeApiVersion:  v1
	I1209 04:35:40.098378 1614600 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1209 04:35:40.098514 1614600 ssh_runner.go:195] Run: crio --version
	I1209 04:35:40.127067 1614600 command_runner.go:130] > crio version 1.34.3
	I1209 04:35:40.127092 1614600 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1209 04:35:40.127099 1614600 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1209 04:35:40.127105 1614600 command_runner.go:130] >    GitTreeState:   dirty
	I1209 04:35:40.127110 1614600 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1209 04:35:40.127137 1614600 command_runner.go:130] >    GoVersion:      go1.24.6
	I1209 04:35:40.127156 1614600 command_runner.go:130] >    Compiler:       gc
	I1209 04:35:40.127168 1614600 command_runner.go:130] >    Platform:       linux/arm64
	I1209 04:35:40.127172 1614600 command_runner.go:130] >    Linkmode:       static
	I1209 04:35:40.127180 1614600 command_runner.go:130] >    BuildTags:
	I1209 04:35:40.127185 1614600 command_runner.go:130] >      static
	I1209 04:35:40.127194 1614600 command_runner.go:130] >      netgo
	I1209 04:35:40.127198 1614600 command_runner.go:130] >      osusergo
	I1209 04:35:40.127227 1614600 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1209 04:35:40.127238 1614600 command_runner.go:130] >      seccomp
	I1209 04:35:40.127242 1614600 command_runner.go:130] >      apparmor
	I1209 04:35:40.127250 1614600 command_runner.go:130] >      selinux
	I1209 04:35:40.127255 1614600 command_runner.go:130] >    LDFlags:          unknown
	I1209 04:35:40.127262 1614600 command_runner.go:130] >    SeccompEnabled:   true
	I1209 04:35:40.127267 1614600 command_runner.go:130] >    AppArmorEnabled:  false
	I1209 04:35:40.129252 1614600 ssh_runner.go:195] Run: crio --version
	I1209 04:35:40.157358 1614600 command_runner.go:130] > crio version 1.34.3
	I1209 04:35:40.157406 1614600 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1209 04:35:40.157412 1614600 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1209 04:35:40.157417 1614600 command_runner.go:130] >    GitTreeState:   dirty
	I1209 04:35:40.157423 1614600 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1209 04:35:40.157427 1614600 command_runner.go:130] >    GoVersion:      go1.24.6
	I1209 04:35:40.157432 1614600 command_runner.go:130] >    Compiler:       gc
	I1209 04:35:40.157472 1614600 command_runner.go:130] >    Platform:       linux/arm64
	I1209 04:35:40.157484 1614600 command_runner.go:130] >    Linkmode:       static
	I1209 04:35:40.157489 1614600 command_runner.go:130] >    BuildTags:
	I1209 04:35:40.157492 1614600 command_runner.go:130] >      static
	I1209 04:35:40.157496 1614600 command_runner.go:130] >      netgo
	I1209 04:35:40.157508 1614600 command_runner.go:130] >      osusergo
	I1209 04:35:40.157512 1614600 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1209 04:35:40.157516 1614600 command_runner.go:130] >      seccomp
	I1209 04:35:40.157547 1614600 command_runner.go:130] >      apparmor
	I1209 04:35:40.157557 1614600 command_runner.go:130] >      selinux
	I1209 04:35:40.157562 1614600 command_runner.go:130] >    LDFlags:          unknown
	I1209 04:35:40.157567 1614600 command_runner.go:130] >    SeccompEnabled:   true
	I1209 04:35:40.157573 1614600 command_runner.go:130] >    AppArmorEnabled:  false
	I1209 04:35:40.164627 1614600 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1209 04:35:40.167496 1614600 cli_runner.go:164] Run: docker network inspect functional-331811 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 04:35:40.183934 1614600 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1209 04:35:40.187985 1614600 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1209 04:35:40.188113 1614600 kubeadm.go:884] updating cluster {Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 04:35:40.188232 1614600 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1209 04:35:40.188297 1614600 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:35:40.225616 1614600 command_runner.go:130] > {
	I1209 04:35:40.225636 1614600 command_runner.go:130] >   "images":  [
	I1209 04:35:40.225641 1614600 command_runner.go:130] >     {
	I1209 04:35:40.225650 1614600 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1209 04:35:40.225655 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.225670 1614600 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1209 04:35:40.225673 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225678 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.225687 1614600 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1209 04:35:40.225695 1614600 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1209 04:35:40.225699 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225704 1614600 command_runner.go:130] >       "size":  "111333938",
	I1209 04:35:40.225711 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.225716 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.225719 1614600 command_runner.go:130] >     },
	I1209 04:35:40.225723 1614600 command_runner.go:130] >     {
	I1209 04:35:40.225729 1614600 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1209 04:35:40.225733 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.225738 1614600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1209 04:35:40.225742 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225751 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.225760 1614600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1209 04:35:40.225769 1614600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1209 04:35:40.225773 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225777 1614600 command_runner.go:130] >       "size":  "29037500",
	I1209 04:35:40.225781 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.225789 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.225792 1614600 command_runner.go:130] >     },
	I1209 04:35:40.225795 1614600 command_runner.go:130] >     {
	I1209 04:35:40.225802 1614600 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1209 04:35:40.225806 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.225811 1614600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1209 04:35:40.225814 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225818 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.225826 1614600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1209 04:35:40.225835 1614600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1209 04:35:40.225838 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225842 1614600 command_runner.go:130] >       "size":  "74491780",
	I1209 04:35:40.225847 1614600 command_runner.go:130] >       "username":  "nonroot",
	I1209 04:35:40.225851 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.225854 1614600 command_runner.go:130] >     },
	I1209 04:35:40.225857 1614600 command_runner.go:130] >     {
	I1209 04:35:40.225864 1614600 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1209 04:35:40.225868 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.225872 1614600 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1209 04:35:40.225881 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225885 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.225897 1614600 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1209 04:35:40.225905 1614600 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1209 04:35:40.225909 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225913 1614600 command_runner.go:130] >       "size":  "60857170",
	I1209 04:35:40.225916 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.225920 1614600 command_runner.go:130] >         "value":  "0"
	I1209 04:35:40.225923 1614600 command_runner.go:130] >       },
	I1209 04:35:40.225931 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.225936 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.225939 1614600 command_runner.go:130] >     },
	I1209 04:35:40.225942 1614600 command_runner.go:130] >     {
	I1209 04:35:40.225949 1614600 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1209 04:35:40.225953 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.225958 1614600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1209 04:35:40.225961 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225965 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.225973 1614600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1209 04:35:40.225981 1614600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1209 04:35:40.225983 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225987 1614600 command_runner.go:130] >       "size":  "84949999",
	I1209 04:35:40.225991 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.225995 1614600 command_runner.go:130] >         "value":  "0"
	I1209 04:35:40.225998 1614600 command_runner.go:130] >       },
	I1209 04:35:40.226001 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.226005 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.226008 1614600 command_runner.go:130] >     },
	I1209 04:35:40.226011 1614600 command_runner.go:130] >     {
	I1209 04:35:40.226018 1614600 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1209 04:35:40.226021 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.226027 1614600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1209 04:35:40.226030 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.226037 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.226045 1614600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1209 04:35:40.226054 1614600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1209 04:35:40.226057 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.226062 1614600 command_runner.go:130] >       "size":  "72170325",
	I1209 04:35:40.226065 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.226069 1614600 command_runner.go:130] >         "value":  "0"
	I1209 04:35:40.226072 1614600 command_runner.go:130] >       },
	I1209 04:35:40.226076 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.226080 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.226082 1614600 command_runner.go:130] >     },
	I1209 04:35:40.226085 1614600 command_runner.go:130] >     {
	I1209 04:35:40.226092 1614600 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1209 04:35:40.226096 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.226101 1614600 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1209 04:35:40.226104 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.226108 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.226115 1614600 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1209 04:35:40.226123 1614600 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1209 04:35:40.226126 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.226130 1614600 command_runner.go:130] >       "size":  "74106775",
	I1209 04:35:40.226133 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.226137 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.226140 1614600 command_runner.go:130] >     },
	I1209 04:35:40.226143 1614600 command_runner.go:130] >     {
	I1209 04:35:40.226149 1614600 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1209 04:35:40.226153 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.226159 1614600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1209 04:35:40.226162 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.226166 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.226174 1614600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1209 04:35:40.226196 1614600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1209 04:35:40.226200 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.226207 1614600 command_runner.go:130] >       "size":  "49822549",
	I1209 04:35:40.226210 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.226214 1614600 command_runner.go:130] >         "value":  "0"
	I1209 04:35:40.226218 1614600 command_runner.go:130] >       },
	I1209 04:35:40.226222 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.226226 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.226228 1614600 command_runner.go:130] >     },
	I1209 04:35:40.226232 1614600 command_runner.go:130] >     {
	I1209 04:35:40.226238 1614600 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1209 04:35:40.226242 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.226246 1614600 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1209 04:35:40.226249 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.226253 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.226261 1614600 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1209 04:35:40.226269 1614600 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1209 04:35:40.226273 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.226277 1614600 command_runner.go:130] >       "size":  "519884",
	I1209 04:35:40.226280 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.226284 1614600 command_runner.go:130] >         "value":  "65535"
	I1209 04:35:40.226288 1614600 command_runner.go:130] >       },
	I1209 04:35:40.226294 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.226297 1614600 command_runner.go:130] >       "pinned":  true
	I1209 04:35:40.226301 1614600 command_runner.go:130] >     }
	I1209 04:35:40.226303 1614600 command_runner.go:130] >   ]
	I1209 04:35:40.226307 1614600 command_runner.go:130] > }
	I1209 04:35:40.228010 1614600 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 04:35:40.228035 1614600 crio.go:433] Images already preloaded, skipping extraction
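The "all images are preloaded" verdict follows from parsing the crictl JSON above and checking that every expected tag is present. A sketch with struct fields matching that JSON; the expected list here is abbreviated to two tags:

// Sketch: verify preloaded images from `sudo crictl images --output json`.
package main

import (
    "encoding/json"
    "fmt"
)

type image struct {
    RepoTags []string `json:"repoTags"`
}

type imageList struct {
    Images []image `json:"images"`
}

func main() {
    // Stand-in for the crictl output shown in the log.
    raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.10.1"]},{"repoTags":["registry.k8s.io/etcd:3.6.5-0"]}]}`)

    var list imageList
    if err := json.Unmarshal(raw, &list); err != nil {
        panic(err)
    }

    have := map[string]bool{}
    for _, img := range list.Images {
        for _, tag := range img.RepoTags {
            have[tag] = true
        }
    }

    for _, want := range []string{"registry.k8s.io/pause:3.10.1", "registry.k8s.io/etcd:3.6.5-0"} {
        fmt.Printf("%s preloaded: %v\n", want, have[want])
    }
}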
	I1209 04:35:40.228091 1614600 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:35:40.253311 1614600 command_runner.go:130] > {
	I1209 04:35:40.253331 1614600 command_runner.go:130] >   "images":  [
	I1209 04:35:40.253335 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253349 1614600 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1209 04:35:40.253353 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253360 1614600 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1209 04:35:40.253363 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253367 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253375 1614600 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1209 04:35:40.253383 1614600 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1209 04:35:40.253386 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253391 1614600 command_runner.go:130] >       "size":  "111333938",
	I1209 04:35:40.253395 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.253400 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.253403 1614600 command_runner.go:130] >     },
	I1209 04:35:40.253406 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253412 1614600 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1209 04:35:40.253416 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253421 1614600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1209 04:35:40.253425 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253429 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253437 1614600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1209 04:35:40.253445 1614600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1209 04:35:40.253449 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253453 1614600 command_runner.go:130] >       "size":  "29037500",
	I1209 04:35:40.253457 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.253463 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.253466 1614600 command_runner.go:130] >     },
	I1209 04:35:40.253469 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253476 1614600 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1209 04:35:40.253480 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253485 1614600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1209 04:35:40.253489 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253492 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253500 1614600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1209 04:35:40.253508 1614600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1209 04:35:40.253515 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253519 1614600 command_runner.go:130] >       "size":  "74491780",
	I1209 04:35:40.253523 1614600 command_runner.go:130] >       "username":  "nonroot",
	I1209 04:35:40.253528 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.253531 1614600 command_runner.go:130] >     },
	I1209 04:35:40.253534 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253540 1614600 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1209 04:35:40.253544 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253549 1614600 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1209 04:35:40.253553 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253557 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253564 1614600 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1209 04:35:40.253571 1614600 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1209 04:35:40.253574 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253578 1614600 command_runner.go:130] >       "size":  "60857170",
	I1209 04:35:40.253581 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.253585 1614600 command_runner.go:130] >         "value":  "0"
	I1209 04:35:40.253592 1614600 command_runner.go:130] >       },
	I1209 04:35:40.253600 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.253604 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.253607 1614600 command_runner.go:130] >     },
	I1209 04:35:40.253611 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253617 1614600 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1209 04:35:40.253621 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253626 1614600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1209 04:35:40.253629 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253633 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253641 1614600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1209 04:35:40.253649 1614600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1209 04:35:40.253651 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253655 1614600 command_runner.go:130] >       "size":  "84949999",
	I1209 04:35:40.253659 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.253662 1614600 command_runner.go:130] >         "value":  "0"
	I1209 04:35:40.253669 1614600 command_runner.go:130] >       },
	I1209 04:35:40.253672 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.253676 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.253679 1614600 command_runner.go:130] >     },
	I1209 04:35:40.253682 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253688 1614600 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1209 04:35:40.253691 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253698 1614600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1209 04:35:40.253701 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253704 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253713 1614600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1209 04:35:40.253721 1614600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1209 04:35:40.253724 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253728 1614600 command_runner.go:130] >       "size":  "72170325",
	I1209 04:35:40.253731 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.253735 1614600 command_runner.go:130] >         "value":  "0"
	I1209 04:35:40.253738 1614600 command_runner.go:130] >       },
	I1209 04:35:40.253742 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.253745 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.253748 1614600 command_runner.go:130] >     },
	I1209 04:35:40.253751 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253758 1614600 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1209 04:35:40.253762 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253767 1614600 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1209 04:35:40.253770 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253773 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253781 1614600 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1209 04:35:40.253789 1614600 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1209 04:35:40.253792 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253795 1614600 command_runner.go:130] >       "size":  "74106775",
	I1209 04:35:40.253799 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.253803 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.253806 1614600 command_runner.go:130] >     },
	I1209 04:35:40.253812 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253819 1614600 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1209 04:35:40.253823 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253828 1614600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1209 04:35:40.253831 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253835 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253843 1614600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1209 04:35:40.253860 1614600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1209 04:35:40.253863 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253867 1614600 command_runner.go:130] >       "size":  "49822549",
	I1209 04:35:40.253870 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.253874 1614600 command_runner.go:130] >         "value":  "0"
	I1209 04:35:40.253877 1614600 command_runner.go:130] >       },
	I1209 04:35:40.253881 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.253884 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.253887 1614600 command_runner.go:130] >     },
	I1209 04:35:40.253890 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253896 1614600 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1209 04:35:40.253900 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253905 1614600 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1209 04:35:40.253908 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253912 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253919 1614600 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1209 04:35:40.253926 1614600 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1209 04:35:40.253929 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253934 1614600 command_runner.go:130] >       "size":  "519884",
	I1209 04:35:40.253937 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.253941 1614600 command_runner.go:130] >         "value":  "65535"
	I1209 04:35:40.253944 1614600 command_runner.go:130] >       },
	I1209 04:35:40.253948 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.253952 1614600 command_runner.go:130] >       "pinned":  true
	I1209 04:35:40.253955 1614600 command_runner.go:130] >     }
	I1209 04:35:40.253958 1614600 command_runner.go:130] >   ]
	I1209 04:35:40.253965 1614600 command_runner.go:130] > }
	I1209 04:35:40.254095 1614600 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 04:35:40.254103 1614600 cache_images.go:86] Images are preloaded, skipping loading
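The JSON above is CRI-O's image list as returned over CRI. A quick way to reproduce the check by hand, sketched under the assumption that crictl and jq are available (field names follow the protojson output shown above, where size is a string):

  minikube ssh -p functional-331811 -- sudo crictl images -o json \
    | jq -r '.images[] | (.repoTags[0] // .id) + "  " + .size'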
	I1209 04:35:40.254110 1614600 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1209 04:35:40.254208 1614600 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-331811 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
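minikube renders the ExecStart above into a systemd drop-in for the kubelet unit on the node. To inspect the unit as systemd actually sees it (a sketch, reusing this run's profile name):

  minikube ssh -p functional-331811 -- sudo systemctl cat kubelet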
	I1209 04:35:40.254292 1614600 ssh_runner.go:195] Run: crio config
	I1209 04:35:40.303771 1614600 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1209 04:35:40.303802 1614600 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1209 04:35:40.303810 1614600 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1209 04:35:40.303813 1614600 command_runner.go:130] > #
	I1209 04:35:40.303821 1614600 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1209 04:35:40.303827 1614600 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1209 04:35:40.303834 1614600 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1209 04:35:40.303844 1614600 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1209 04:35:40.303848 1614600 command_runner.go:130] > # reload'.
	I1209 04:35:40.303854 1614600 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1209 04:35:40.303865 1614600 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1209 04:35:40.303872 1614600 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1209 04:35:40.303882 1614600 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1209 04:35:40.303886 1614600 command_runner.go:130] > [crio]
	I1209 04:35:40.303892 1614600 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1209 04:35:40.303900 1614600 command_runner.go:130] > # containers images, in this directory.
	I1209 04:35:40.304039 1614600 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1209 04:35:40.304055 1614600 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1209 04:35:40.304161 1614600 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1209 04:35:40.304178 1614600 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores its images in this directory, separately from the root directory.
	I1209 04:35:40.304429 1614600 command_runner.go:130] > # imagestore = ""
	I1209 04:35:40.304453 1614600 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1209 04:35:40.304461 1614600 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1209 04:35:40.304691 1614600 command_runner.go:130] > # storage_driver = "overlay"
	I1209 04:35:40.304703 1614600 command_runner.go:130] > # List of options to pass to the storage driver. Please refer to
	I1209 04:35:40.304710 1614600 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1209 04:35:40.304804 1614600 command_runner.go:130] > # storage_option = [
	I1209 04:35:40.305009 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.305024 1614600 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1209 04:35:40.305032 1614600 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1209 04:35:40.305284 1614600 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1209 04:35:40.305301 1614600 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1209 04:35:40.305327 1614600 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1209 04:35:40.305337 1614600 command_runner.go:130] > # always happen on a node reboot
	I1209 04:35:40.305502 1614600 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1209 04:35:40.305532 1614600 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1209 04:35:40.305540 1614600 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1209 04:35:40.305547 1614600 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1209 04:35:40.305748 1614600 command_runner.go:130] > # version_file_persist = ""
	I1209 04:35:40.305764 1614600 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1209 04:35:40.305775 1614600 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1209 04:35:40.306057 1614600 command_runner.go:130] > # internal_wipe = true
	I1209 04:35:40.306082 1614600 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1209 04:35:40.306090 1614600 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1209 04:35:40.306271 1614600 command_runner.go:130] > # internal_repair = true
	I1209 04:35:40.306293 1614600 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1209 04:35:40.306300 1614600 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1209 04:35:40.306308 1614600 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1209 04:35:40.306632 1614600 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1209 04:35:40.306647 1614600 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1209 04:35:40.306651 1614600 command_runner.go:130] > [crio.api]
	I1209 04:35:40.306663 1614600 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1209 04:35:40.306916 1614600 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1209 04:35:40.306934 1614600 command_runner.go:130] > # IP address on which the stream server will listen.
	I1209 04:35:40.307148 1614600 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1209 04:35:40.307163 1614600 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1209 04:35:40.307169 1614600 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1209 04:35:40.307396 1614600 command_runner.go:130] > # stream_port = "0"
	I1209 04:35:40.307416 1614600 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1209 04:35:40.307661 1614600 command_runner.go:130] > # stream_enable_tls = false
	I1209 04:35:40.307682 1614600 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1209 04:35:40.307871 1614600 command_runner.go:130] > # stream_idle_timeout = ""
	I1209 04:35:40.307887 1614600 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1209 04:35:40.307900 1614600 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1209 04:35:40.308079 1614600 command_runner.go:130] > # stream_tls_cert = ""
	I1209 04:35:40.308090 1614600 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1209 04:35:40.308097 1614600 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1209 04:35:40.308297 1614600 command_runner.go:130] > # stream_tls_key = ""
	I1209 04:35:40.308313 1614600 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1209 04:35:40.308326 1614600 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1209 04:35:40.308345 1614600 command_runner.go:130] > # automatically pick up the changes.
	I1209 04:35:40.308572 1614600 command_runner.go:130] > # stream_tls_ca = ""
	I1209 04:35:40.308610 1614600 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1209 04:35:40.308814 1614600 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1209 04:35:40.308835 1614600 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1209 04:35:40.309085 1614600 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
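Every commented-out line in this dump is a compiled-in default. Overrides are conventionally placed in drop-in files under /etc/crio/crio.conf.d/; a minimal sketch (the file name and port value are illustrative, not from this run):

  sudo tee /etc/crio/crio.conf.d/05-stream.conf <<'EOF'
  [crio.api]
  stream_port = "10010"
  EOF
  sudo systemctl restart crio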
	I1209 04:35:40.309103 1614600 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1209 04:35:40.309115 1614600 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1209 04:35:40.309119 1614600 command_runner.go:130] > [crio.runtime]
	I1209 04:35:40.309126 1614600 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1209 04:35:40.309132 1614600 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1209 04:35:40.309143 1614600 command_runner.go:130] > # "nofile=1024:2048"
	I1209 04:35:40.309150 1614600 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1209 04:35:40.309302 1614600 command_runner.go:130] > # default_ulimits = [
	I1209 04:35:40.309485 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.309504 1614600 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1209 04:35:40.309688 1614600 command_runner.go:130] > # no_pivot = false
	I1209 04:35:40.309706 1614600 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1209 04:35:40.309713 1614600 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1209 04:35:40.310551 1614600 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1209 04:35:40.310598 1614600 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1209 04:35:40.310608 1614600 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1209 04:35:40.310618 1614600 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1209 04:35:40.310767 1614600 command_runner.go:130] > # conmon = ""
	I1209 04:35:40.310786 1614600 command_runner.go:130] > # Cgroup setting for conmon
	I1209 04:35:40.310795 1614600 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1209 04:35:40.310806 1614600 command_runner.go:130] > conmon_cgroup = "pod"
	I1209 04:35:40.310814 1614600 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1209 04:35:40.310835 1614600 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1209 04:35:40.310842 1614600 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1209 04:35:40.310849 1614600 command_runner.go:130] > # conmon_env = [
	I1209 04:35:40.310857 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.310866 1614600 command_runner.go:130] > # Additional environment variables to set for all the
	I1209 04:35:40.310873 1614600 command_runner.go:130] > # containers. These are overridden if set in the
	I1209 04:35:40.310879 1614600 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1209 04:35:40.310886 1614600 command_runner.go:130] > # default_env = [
	I1209 04:35:40.310889 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.310895 1614600 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1209 04:35:40.310907 1614600 command_runner.go:130] > # This option is deprecated, and will instead be inferred from whether SELinux is enabled on the host in the future.
	I1209 04:35:40.310914 1614600 command_runner.go:130] > # selinux = false
	I1209 04:35:40.310925 1614600 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1209 04:35:40.310933 1614600 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1209 04:35:40.310938 1614600 command_runner.go:130] > # This option supports live configuration reload.
	I1209 04:35:40.310944 1614600 command_runner.go:130] > # seccomp_profile = ""
	I1209 04:35:40.310954 1614600 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1209 04:35:40.310963 1614600 command_runner.go:130] > # This option supports live configuration reload.
	I1209 04:35:40.310968 1614600 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1209 04:35:40.310974 1614600 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1209 04:35:40.310984 1614600 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1209 04:35:40.310991 1614600 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1209 04:35:40.311002 1614600 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1209 04:35:40.311007 1614600 command_runner.go:130] > # This option supports live configuration reload.
	I1209 04:35:40.311011 1614600 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1209 04:35:40.311017 1614600 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1209 04:35:40.311022 1614600 command_runner.go:130] > # the cgroup blockio controller.
	I1209 04:35:40.311028 1614600 command_runner.go:130] > # blockio_config_file = ""
	I1209 04:35:40.311035 1614600 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1209 04:35:40.311042 1614600 command_runner.go:130] > # blockio parameters.
	I1209 04:35:40.311046 1614600 command_runner.go:130] > # blockio_reload = false
	I1209 04:35:40.311059 1614600 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1209 04:35:40.311064 1614600 command_runner.go:130] > # irqbalance daemon.
	I1209 04:35:40.311073 1614600 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1209 04:35:40.311083 1614600 command_runner.go:130] > # irqbalance_config_restore_file allows setting a CPU mask CRI-O should
	I1209 04:35:40.311091 1614600 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1209 04:35:40.311107 1614600 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1209 04:35:40.311272 1614600 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1209 04:35:40.311287 1614600 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1209 04:35:40.311293 1614600 command_runner.go:130] > # This option supports live configuration reload.
	I1209 04:35:40.311441 1614600 command_runner.go:130] > # rdt_config_file = ""
	I1209 04:35:40.311462 1614600 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1209 04:35:40.311467 1614600 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1209 04:35:40.311477 1614600 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1209 04:35:40.311487 1614600 command_runner.go:130] > # separate_pull_cgroup = ""
	I1209 04:35:40.311493 1614600 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1209 04:35:40.311505 1614600 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1209 04:35:40.311514 1614600 command_runner.go:130] > # will be added.
	I1209 04:35:40.311522 1614600 command_runner.go:130] > # default_capabilities = [
	I1209 04:35:40.311525 1614600 command_runner.go:130] > # 	"CHOWN",
	I1209 04:35:40.311531 1614600 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1209 04:35:40.311535 1614600 command_runner.go:130] > # 	"FSETID",
	I1209 04:35:40.311541 1614600 command_runner.go:130] > # 	"FOWNER",
	I1209 04:35:40.311545 1614600 command_runner.go:130] > # 	"SETGID",
	I1209 04:35:40.311548 1614600 command_runner.go:130] > # 	"SETUID",
	I1209 04:35:40.311573 1614600 command_runner.go:130] > # 	"SETPCAP",
	I1209 04:35:40.311581 1614600 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1209 04:35:40.311585 1614600 command_runner.go:130] > # 	"KILL",
	I1209 04:35:40.311752 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.311769 1614600 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1209 04:35:40.311777 1614600 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1209 04:35:40.311784 1614600 command_runner.go:130] > # add_inheritable_capabilities = false
	I1209 04:35:40.311790 1614600 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1209 04:35:40.311796 1614600 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1209 04:35:40.311802 1614600 command_runner.go:130] > default_sysctls = [
	I1209 04:35:40.311807 1614600 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1209 04:35:40.311811 1614600 command_runner.go:130] > ]
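This single default sysctl is what allows unprivileged containers in minikube to bind ports below 1024. To confirm it from inside any running workload (the deployment name is a placeholder):

  kubectl exec deploy/<your-deployment> -- cat /proc/sys/net/ipv4/ip_unprivileged_port_start   # expect 0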
	I1209 04:35:40.311823 1614600 command_runner.go:130] > # List of devices on the host that a
	I1209 04:35:40.311829 1614600 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1209 04:35:40.311833 1614600 command_runner.go:130] > # allowed_devices = [
	I1209 04:35:40.311843 1614600 command_runner.go:130] > # 	"/dev/fuse",
	I1209 04:35:40.311847 1614600 command_runner.go:130] > # 	"/dev/net/tun",
	I1209 04:35:40.311851 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.311856 1614600 command_runner.go:130] > # List of additional devices, specified as
	I1209 04:35:40.311863 1614600 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1209 04:35:40.311870 1614600 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1209 04:35:40.311876 1614600 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1209 04:35:40.311883 1614600 command_runner.go:130] > # additional_devices = [
	I1209 04:35:40.311886 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.311896 1614600 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1209 04:35:40.311900 1614600 command_runner.go:130] > # cdi_spec_dirs = [
	I1209 04:35:40.311903 1614600 command_runner.go:130] > # 	"/etc/cdi",
	I1209 04:35:40.311908 1614600 command_runner.go:130] > # 	"/var/run/cdi",
	I1209 04:35:40.311916 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.311923 1614600 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1209 04:35:40.311929 1614600 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1209 04:35:40.311936 1614600 command_runner.go:130] > # Defaults to false.
	I1209 04:35:40.311942 1614600 command_runner.go:130] > # device_ownership_from_security_context = false
	I1209 04:35:40.311958 1614600 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1209 04:35:40.311969 1614600 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1209 04:35:40.311973 1614600 command_runner.go:130] > # hooks_dir = [
	I1209 04:35:40.311980 1614600 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1209 04:35:40.311986 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.311992 1614600 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1209 04:35:40.312007 1614600 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1209 04:35:40.312013 1614600 command_runner.go:130] > # its default mounts from the following two files:
	I1209 04:35:40.312021 1614600 command_runner.go:130] > #
	I1209 04:35:40.312027 1614600 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1209 04:35:40.312034 1614600 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1209 04:35:40.312039 1614600 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1209 04:35:40.312045 1614600 command_runner.go:130] > #
	I1209 04:35:40.312051 1614600 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1209 04:35:40.312057 1614600 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1209 04:35:40.312065 1614600 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1209 04:35:40.312074 1614600 command_runner.go:130] > #      only add mounts it finds in this file.
	I1209 04:35:40.312077 1614600 command_runner.go:130] > #
	I1209 04:35:40.312081 1614600 command_runner.go:130] > # default_mounts_file = ""
	I1209 04:35:40.312087 1614600 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1209 04:35:40.312097 1614600 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1209 04:35:40.312102 1614600 command_runner.go:130] > # pids_limit = -1
	I1209 04:35:40.312108 1614600 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1209 04:35:40.312120 1614600 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1209 04:35:40.312128 1614600 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1209 04:35:40.312137 1614600 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1209 04:35:40.312275 1614600 command_runner.go:130] > # log_size_max = -1
	I1209 04:35:40.312297 1614600 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1209 04:35:40.312305 1614600 command_runner.go:130] > # log_to_journald = false
	I1209 04:35:40.312312 1614600 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1209 04:35:40.312322 1614600 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1209 04:35:40.312328 1614600 command_runner.go:130] > # Path to directory for container attach sockets.
	I1209 04:35:40.312333 1614600 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1209 04:35:40.312338 1614600 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1209 04:35:40.312345 1614600 command_runner.go:130] > # bind_mount_prefix = ""
	I1209 04:35:40.312351 1614600 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1209 04:35:40.312355 1614600 command_runner.go:130] > # read_only = false
	I1209 04:35:40.312361 1614600 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1209 04:35:40.312373 1614600 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1209 04:35:40.312378 1614600 command_runner.go:130] > # live configuration reload.
	I1209 04:35:40.312551 1614600 command_runner.go:130] > # log_level = "info"
	I1209 04:35:40.312568 1614600 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1209 04:35:40.312574 1614600 command_runner.go:130] > # This option supports live configuration reload.
	I1209 04:35:40.312578 1614600 command_runner.go:130] > # log_filter = ""
	I1209 04:35:40.312588 1614600 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1209 04:35:40.312594 1614600 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1209 04:35:40.312600 1614600 command_runner.go:130] > # separated by comma.
	I1209 04:35:40.312614 1614600 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1209 04:35:40.312622 1614600 command_runner.go:130] > # uid_mappings = ""
	I1209 04:35:40.312629 1614600 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1209 04:35:40.312635 1614600 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1209 04:35:40.312644 1614600 command_runner.go:130] > # separated by comma.
	I1209 04:35:40.312652 1614600 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1209 04:35:40.312657 1614600 command_runner.go:130] > # gid_mappings = ""
	I1209 04:35:40.312663 1614600 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1209 04:35:40.312670 1614600 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1209 04:35:40.312676 1614600 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1209 04:35:40.312689 1614600 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1209 04:35:40.312694 1614600 command_runner.go:130] > # minimum_mappable_uid = -1
	I1209 04:35:40.312706 1614600 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1209 04:35:40.312713 1614600 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1209 04:35:40.312719 1614600 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1209 04:35:40.312730 1614600 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1209 04:35:40.312735 1614600 command_runner.go:130] > # minimum_mappable_gid = -1
	I1209 04:35:40.312745 1614600 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1209 04:35:40.312753 1614600 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1209 04:35:40.312759 1614600 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1209 04:35:40.312763 1614600 command_runner.go:130] > # ctr_stop_timeout = 30
	I1209 04:35:40.312771 1614600 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1209 04:35:40.312781 1614600 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1209 04:35:40.312787 1614600 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1209 04:35:40.312792 1614600 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1209 04:35:40.312800 1614600 command_runner.go:130] > # drop_infra_ctr = true
	I1209 04:35:40.312807 1614600 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1209 04:35:40.312813 1614600 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1209 04:35:40.312825 1614600 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1209 04:35:40.312831 1614600 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1209 04:35:40.312838 1614600 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1209 04:35:40.312846 1614600 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1209 04:35:40.312852 1614600 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1209 04:35:40.312863 1614600 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1209 04:35:40.312871 1614600 command_runner.go:130] > # shared_cpuset = ""
	I1209 04:35:40.312877 1614600 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1209 04:35:40.312882 1614600 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1209 04:35:40.312891 1614600 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1209 04:35:40.312899 1614600 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1209 04:35:40.312903 1614600 command_runner.go:130] > # pinns_path = ""
	I1209 04:35:40.312908 1614600 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1209 04:35:40.312919 1614600 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1209 04:35:40.312924 1614600 command_runner.go:130] > # enable_criu_support = true
	I1209 04:35:40.312929 1614600 command_runner.go:130] > # Enable/disable the generation of the container,
	I1209 04:35:40.312936 1614600 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1209 04:35:40.312940 1614600 command_runner.go:130] > # enable_pod_events = false
	I1209 04:35:40.312948 1614600 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1209 04:35:40.312957 1614600 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1209 04:35:40.312962 1614600 command_runner.go:130] > # default_runtime = "crun"
	I1209 04:35:40.312967 1614600 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1209 04:35:40.312984 1614600 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating the path as a directory).
	I1209 04:35:40.312997 1614600 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1209 04:35:40.313003 1614600 command_runner.go:130] > # creation as a file is not desired either.
	I1209 04:35:40.313011 1614600 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1209 04:35:40.313018 1614600 command_runner.go:130] > # the hostname is being managed dynamically.
	I1209 04:35:40.313023 1614600 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1209 04:35:40.313241 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.313258 1614600 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1209 04:35:40.313265 1614600 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1209 04:35:40.313271 1614600 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1209 04:35:40.313279 1614600 command_runner.go:130] > # Each entry in the table should follow the format:
	I1209 04:35:40.313282 1614600 command_runner.go:130] > #
	I1209 04:35:40.313287 1614600 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1209 04:35:40.313298 1614600 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1209 04:35:40.313303 1614600 command_runner.go:130] > # runtime_type = "oci"
	I1209 04:35:40.313307 1614600 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1209 04:35:40.313320 1614600 command_runner.go:130] > # inherit_default_runtime = false
	I1209 04:35:40.313326 1614600 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1209 04:35:40.313335 1614600 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1209 04:35:40.313340 1614600 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1209 04:35:40.313344 1614600 command_runner.go:130] > # monitor_env = []
	I1209 04:35:40.313349 1614600 command_runner.go:130] > # privileged_without_host_devices = false
	I1209 04:35:40.313353 1614600 command_runner.go:130] > # allowed_annotations = []
	I1209 04:35:40.313359 1614600 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1209 04:35:40.313365 1614600 command_runner.go:130] > # no_sync_log = false
	I1209 04:35:40.313369 1614600 command_runner.go:130] > # default_annotations = {}
	I1209 04:35:40.313373 1614600 command_runner.go:130] > # stream_websockets = false
	I1209 04:35:40.313377 1614600 command_runner.go:130] > # seccomp_profile = ""
	I1209 04:35:40.313410 1614600 command_runner.go:130] > # Where:
	I1209 04:35:40.313420 1614600 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1209 04:35:40.313427 1614600 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1209 04:35:40.313440 1614600 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1209 04:35:40.313446 1614600 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1209 04:35:40.313450 1614600 command_runner.go:130] > #   in $PATH.
	I1209 04:35:40.313457 1614600 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1209 04:35:40.313465 1614600 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1209 04:35:40.313471 1614600 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1209 04:35:40.313477 1614600 command_runner.go:130] > #   state.
	I1209 04:35:40.313484 1614600 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1209 04:35:40.313498 1614600 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1209 04:35:40.313505 1614600 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1209 04:35:40.313515 1614600 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1209 04:35:40.313521 1614600 command_runner.go:130] > #   the values from the default runtime on load time.
	I1209 04:35:40.313528 1614600 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1209 04:35:40.313537 1614600 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1209 04:35:40.313543 1614600 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1209 04:35:40.313550 1614600 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1209 04:35:40.313558 1614600 command_runner.go:130] > #   The currently recognized values are:
	I1209 04:35:40.313565 1614600 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1209 04:35:40.313575 1614600 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1209 04:35:40.313584 1614600 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1209 04:35:40.313591 1614600 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1209 04:35:40.313599 1614600 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1209 04:35:40.313611 1614600 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1209 04:35:40.313618 1614600 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1209 04:35:40.313632 1614600 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1209 04:35:40.313638 1614600 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1209 04:35:40.313644 1614600 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1209 04:35:40.313651 1614600 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1209 04:35:40.313662 1614600 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1209 04:35:40.313668 1614600 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1209 04:35:40.313674 1614600 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1209 04:35:40.313684 1614600 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1209 04:35:40.313693 1614600 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1209 04:35:40.313703 1614600 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1209 04:35:40.313707 1614600 command_runner.go:130] > #   deprecated option "conmon".
	I1209 04:35:40.313715 1614600 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1209 04:35:40.313721 1614600 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1209 04:35:40.313730 1614600 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1209 04:35:40.313735 1614600 command_runner.go:130] > #   should be moved to the container's cgroup
	I1209 04:35:40.313742 1614600 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1209 04:35:40.313752 1614600 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1209 04:35:40.313763 1614600 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1209 04:35:40.313771 1614600 command_runner.go:130] > #   conmon-rs by using:
	I1209 04:35:40.313779 1614600 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1209 04:35:40.313788 1614600 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1209 04:35:40.313799 1614600 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1209 04:35:40.313806 1614600 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1209 04:35:40.313811 1614600 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1209 04:35:40.313818 1614600 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1209 04:35:40.313825 1614600 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1209 04:35:40.313830 1614600 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1209 04:35:40.313842 1614600 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1209 04:35:40.313852 1614600 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1209 04:35:40.313860 1614600 command_runner.go:130] > #   when a machine crash happens.
	I1209 04:35:40.313868 1614600 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1209 04:35:40.313881 1614600 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1209 04:35:40.313889 1614600 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1209 04:35:40.313894 1614600 command_runner.go:130] > #   seccomp profile for the runtime.
	I1209 04:35:40.313900 1614600 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1209 04:35:40.313911 1614600 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1209 04:35:40.313915 1614600 command_runner.go:130] > #
	I1209 04:35:40.313919 1614600 command_runner.go:130] > # Using the seccomp notifier feature:
	I1209 04:35:40.313927 1614600 command_runner.go:130] > #
	I1209 04:35:40.313934 1614600 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1209 04:35:40.313942 1614600 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1209 04:35:40.313949 1614600 command_runner.go:130] > #
	I1209 04:35:40.313955 1614600 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1209 04:35:40.313962 1614600 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1209 04:35:40.313965 1614600 command_runner.go:130] > #
	I1209 04:35:40.313971 1614600 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1209 04:35:40.313974 1614600 command_runner.go:130] > # feature.
	I1209 04:35:40.313977 1614600 command_runner.go:130] > #
	I1209 04:35:40.313983 1614600 command_runner.go:130] > # If everything is set up, CRI-O will modify the chosen seccomp profiles for
	I1209 04:35:40.313992 1614600 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1209 04:35:40.314004 1614600 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1209 04:35:40.314014 1614600 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1209 04:35:40.314021 1614600 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1209 04:35:40.314029 1614600 command_runner.go:130] > #
	I1209 04:35:40.314036 1614600 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1209 04:35:40.314042 1614600 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1209 04:35:40.314045 1614600 command_runner.go:130] > #
	I1209 04:35:40.314051 1614600 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1209 04:35:40.314057 1614600 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1209 04:35:40.314063 1614600 command_runner.go:130] > #
	I1209 04:35:40.314070 1614600 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1209 04:35:40.314076 1614600 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1209 04:35:40.314083 1614600 command_runner.go:130] > # limitation.
	I1209 04:35:40.314088 1614600 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1209 04:35:40.314093 1614600 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1209 04:35:40.314104 1614600 command_runner.go:130] > runtime_type = ""
	I1209 04:35:40.314108 1614600 command_runner.go:130] > runtime_root = "/run/crun"
	I1209 04:35:40.314112 1614600 command_runner.go:130] > inherit_default_runtime = false
	I1209 04:35:40.314120 1614600 command_runner.go:130] > runtime_config_path = ""
	I1209 04:35:40.314124 1614600 command_runner.go:130] > container_min_memory = ""
	I1209 04:35:40.314130 1614600 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1209 04:35:40.314134 1614600 command_runner.go:130] > monitor_cgroup = "pod"
	I1209 04:35:40.314138 1614600 command_runner.go:130] > monitor_exec_cgroup = ""
	I1209 04:35:40.314142 1614600 command_runner.go:130] > allowed_annotations = [
	I1209 04:35:40.314152 1614600 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1209 04:35:40.314155 1614600 command_runner.go:130] > ]
	I1209 04:35:40.314159 1614600 command_runner.go:130] > privileged_without_host_devices = false
	I1209 04:35:40.314164 1614600 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1209 04:35:40.314172 1614600 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1209 04:35:40.314177 1614600 command_runner.go:130] > runtime_type = ""
	I1209 04:35:40.314181 1614600 command_runner.go:130] > runtime_root = "/run/runc"
	I1209 04:35:40.314191 1614600 command_runner.go:130] > inherit_default_runtime = false
	I1209 04:35:40.314195 1614600 command_runner.go:130] > runtime_config_path = ""
	I1209 04:35:40.314203 1614600 command_runner.go:130] > container_min_memory = ""
	I1209 04:35:40.314208 1614600 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1209 04:35:40.314211 1614600 command_runner.go:130] > monitor_cgroup = "pod"
	I1209 04:35:40.314215 1614600 command_runner.go:130] > monitor_exec_cgroup = ""
	I1209 04:35:40.314219 1614600 command_runner.go:130] > privileged_without_host_devices = false
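With both handlers defined, a pod can opt into runc instead of the default crun via a Kubernetes RuntimeClass whose handler field matches the [crio.runtime.runtimes.*] entry name; a minimal sketch (afterwards, set spec.runtimeClassName: runc on the pod):

  kubectl apply -f - <<'EOF'
  apiVersion: node.k8s.io/v1
  kind: RuntimeClass
  metadata:
    name: runc
  handler: runc
  EOF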
	I1209 04:35:40.314440 1614600 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1209 04:35:40.314455 1614600 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1209 04:35:40.314461 1614600 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1209 04:35:40.314470 1614600 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1209 04:35:40.314481 1614600 command_runner.go:130] > # The currently supported resources are "cpuperiod" "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1209 04:35:40.314491 1614600 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1209 04:35:40.314503 1614600 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1209 04:35:40.314509 1614600 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1209 04:35:40.314523 1614600 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1209 04:35:40.314532 1614600 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1209 04:35:40.314548 1614600 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1209 04:35:40.314556 1614600 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1209 04:35:40.314560 1614600 command_runner.go:130] > # Example:
	I1209 04:35:40.314565 1614600 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1209 04:35:40.314584 1614600 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1209 04:35:40.314596 1614600 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1209 04:35:40.314602 1614600 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1209 04:35:40.314611 1614600 command_runner.go:130] > # cpuset = "0-1"
	I1209 04:35:40.314615 1614600 command_runner.go:130] > # cpushares = "5"
	I1209 04:35:40.314619 1614600 command_runner.go:130] > # cpuquota = "1000"
	I1209 04:35:40.314623 1614600 command_runner.go:130] > # cpuperiod = "100000"
	I1209 04:35:40.314627 1614600 command_runner.go:130] > # cpulimit = "35"
	I1209 04:35:40.314630 1614600 command_runner.go:130] > # Where:
	I1209 04:35:40.314634 1614600 command_runner.go:130] > # The workload name is workload-type.
	I1209 04:35:40.314642 1614600 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is an exact string match).
	I1209 04:35:40.314651 1614600 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1209 04:35:40.314657 1614600 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1209 04:35:40.314665 1614600 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1209 04:35:40.314675 1614600 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
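
[Editor's note] The config comments above describe a two-level annotation scheme: one activation annotation opts the whole pod into a workload, and optional per-container annotations override individual resources. Below is a minimal sketch, in Go, of the annotations a pod would carry for the example workload above; the container name "myctr" is a hypothetical stand-in, and the exact JSON value shape follows the example in the dump, not a verified CRI-O contract.

package main

import "fmt"

func main() {
	// Annotations a pod would carry to opt into the "workload-type" workload
	// from the example config above. The activation annotation is key-only
	// (its value is ignored); the per-container annotation overrides cpushares
	// for the hypothetical container "myctr".
	annotations := map[string]string{
		"io.crio/workload":            "",                    // activation (precise string match)
		"io.crio.workload-type/myctr": `{"cpushares": "5"}`,  // per-container override
	}
	fmt.Println(annotations)
}
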
	I1209 04:35:40.314680 1614600 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1209 04:35:40.314688 1614600 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1209 04:35:40.314695 1614600 command_runner.go:130] > # Default value is set to true
	I1209 04:35:40.314700 1614600 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1209 04:35:40.314706 1614600 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1209 04:35:40.314710 1614600 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1209 04:35:40.314715 1614600 command_runner.go:130] > # Default value is set to 'false'
	I1209 04:35:40.314719 1614600 command_runner.go:130] > # disable_hostport_mapping = false
	I1209 04:35:40.314731 1614600 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1209 04:35:40.314740 1614600 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1209 04:35:40.314747 1614600 command_runner.go:130] > # timezone = ""
	I1209 04:35:40.314754 1614600 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1209 04:35:40.314757 1614600 command_runner.go:130] > #
	I1209 04:35:40.314763 1614600 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1209 04:35:40.314777 1614600 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1209 04:35:40.314781 1614600 command_runner.go:130] > [crio.image]
	I1209 04:35:40.314787 1614600 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1209 04:35:40.314791 1614600 command_runner.go:130] > # default_transport = "docker://"
	I1209 04:35:40.314797 1614600 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1209 04:35:40.314810 1614600 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1209 04:35:40.314814 1614600 command_runner.go:130] > # global_auth_file = ""
	I1209 04:35:40.314819 1614600 command_runner.go:130] > # The image used to instantiate infra containers.
	I1209 04:35:40.314829 1614600 command_runner.go:130] > # This option supports live configuration reload.
	I1209 04:35:40.314834 1614600 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1209 04:35:40.314841 1614600 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1209 04:35:40.314852 1614600 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1209 04:35:40.314858 1614600 command_runner.go:130] > # This option supports live configuration reload.
	I1209 04:35:40.314863 1614600 command_runner.go:130] > # pause_image_auth_file = ""
	I1209 04:35:40.314868 1614600 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1209 04:35:40.314875 1614600 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1209 04:35:40.314888 1614600 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1209 04:35:40.314904 1614600 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1209 04:35:40.314909 1614600 command_runner.go:130] > # pause_command = "/pause"
	I1209 04:35:40.314915 1614600 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1209 04:35:40.314924 1614600 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1209 04:35:40.314931 1614600 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1209 04:35:40.314942 1614600 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1209 04:35:40.314949 1614600 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1209 04:35:40.314955 1614600 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1209 04:35:40.314959 1614600 command_runner.go:130] > # pinned_images = [
	I1209 04:35:40.314961 1614600 command_runner.go:130] > # ]
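
[Editor's note] The three pinned_images match modes described above (exact, trailing-glob, keyword) are easy to mis-read, so here is a minimal Go sketch of the matching rule as stated; the function name matchPinned is ours, not CRI-O's.

package main

import (
	"fmt"
	"strings"
)

// matchPinned implements the three pattern styles described above:
// exact matches must cover the whole name, glob patterns may end in '*',
// and keyword patterns are wrapped in '*' on both ends.
func matchPinned(pattern, name string) bool {
	switch {
	case strings.HasPrefix(pattern, "*") && strings.HasSuffix(pattern, "*"):
		return strings.Contains(name, strings.Trim(pattern, "*"))
	case strings.HasSuffix(pattern, "*"):
		return strings.HasPrefix(name, strings.TrimSuffix(pattern, "*"))
	default:
		return pattern == name
	}
}

func main() {
	fmt.Println(matchPinned("registry.k8s.io/pause*", "registry.k8s.io/pause:3.10.1")) // true: glob
	fmt.Println(matchPinned("*pause*", "registry.k8s.io/pause:3.10.1"))                // true: keyword
	fmt.Println(matchPinned("registry.k8s.io/pause", "registry.k8s.io/pause:3.10.1"))  // false: exact must match entire name
}
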
	I1209 04:35:40.314968 1614600 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1209 04:35:40.314978 1614600 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1209 04:35:40.314984 1614600 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1209 04:35:40.314995 1614600 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1209 04:35:40.315001 1614600 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1209 04:35:40.315011 1614600 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1209 04:35:40.315023 1614600 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1209 04:35:40.315031 1614600 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1209 04:35:40.315037 1614600 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1209 04:35:40.315049 1614600 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I1209 04:35:40.315055 1614600 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1209 04:35:40.315065 1614600 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1209 04:35:40.315071 1614600 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1209 04:35:40.315078 1614600 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1209 04:35:40.315086 1614600 command_runner.go:130] > # changing them here.
	I1209 04:35:40.315091 1614600 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1209 04:35:40.315095 1614600 command_runner.go:130] > # insecure_registries = [
	I1209 04:35:40.315099 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.315108 1614600 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1209 04:35:40.315114 1614600 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1209 04:35:40.315319 1614600 command_runner.go:130] > # image_volumes = "mkdir"
	I1209 04:35:40.315344 1614600 command_runner.go:130] > # Temporary directory to use for storing big files
	I1209 04:35:40.315350 1614600 command_runner.go:130] > # big_files_temporary_dir = ""
	I1209 04:35:40.315355 1614600 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1209 04:35:40.315362 1614600 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1209 04:35:40.315367 1614600 command_runner.go:130] > # auto_reload_registries = false
	I1209 04:35:40.315372 1614600 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1209 04:35:40.315381 1614600 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval as pull_progress_timeout / 10.
	I1209 04:35:40.315390 1614600 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1209 04:35:40.315399 1614600 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1209 04:35:40.315404 1614600 command_runner.go:130] > # The mode of short name resolution.
	I1209 04:35:40.315411 1614600 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1209 04:35:40.315422 1614600 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1209 04:35:40.315430 1614600 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1209 04:35:40.315434 1614600 command_runner.go:130] > # short_name_mode = "enforcing"
	I1209 04:35:40.315440 1614600 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1209 04:35:40.315446 1614600 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1209 04:35:40.315450 1614600 command_runner.go:130] > # oci_artifact_mount_support = true
	I1209 04:35:40.315456 1614600 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1209 04:35:40.315460 1614600 command_runner.go:130] > # CNI plugins.
	I1209 04:35:40.315463 1614600 command_runner.go:130] > [crio.network]
	I1209 04:35:40.315469 1614600 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1209 04:35:40.315475 1614600 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1209 04:35:40.315482 1614600 command_runner.go:130] > # cni_default_network = ""
	I1209 04:35:40.315488 1614600 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1209 04:35:40.315493 1614600 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1209 04:35:40.315503 1614600 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1209 04:35:40.315507 1614600 command_runner.go:130] > # plugin_dirs = [
	I1209 04:35:40.315515 1614600 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1209 04:35:40.315519 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.315526 1614600 command_runner.go:130] > # List of included pod metrics.
	I1209 04:35:40.315530 1614600 command_runner.go:130] > # included_pod_metrics = [
	I1209 04:35:40.315533 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.315539 1614600 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1209 04:35:40.315542 1614600 command_runner.go:130] > [crio.metrics]
	I1209 04:35:40.315547 1614600 command_runner.go:130] > # Globally enable or disable metrics support.
	I1209 04:35:40.315552 1614600 command_runner.go:130] > # enable_metrics = false
	I1209 04:35:40.315562 1614600 command_runner.go:130] > # Specify enabled metrics collectors.
	I1209 04:35:40.315567 1614600 command_runner.go:130] > # Per default all metrics are enabled.
	I1209 04:35:40.315573 1614600 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1209 04:35:40.315587 1614600 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1209 04:35:40.315593 1614600 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1209 04:35:40.315601 1614600 command_runner.go:130] > # metrics_collectors = [
	I1209 04:35:40.315605 1614600 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1209 04:35:40.315610 1614600 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1209 04:35:40.315614 1614600 command_runner.go:130] > # 	"containers_oom_total",
	I1209 04:35:40.315617 1614600 command_runner.go:130] > # 	"processes_defunct",
	I1209 04:35:40.315621 1614600 command_runner.go:130] > # 	"operations_total",
	I1209 04:35:40.315626 1614600 command_runner.go:130] > # 	"operations_latency_seconds",
	I1209 04:35:40.315630 1614600 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1209 04:35:40.315635 1614600 command_runner.go:130] > # 	"operations_errors_total",
	I1209 04:35:40.315638 1614600 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1209 04:35:40.315642 1614600 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1209 04:35:40.315646 1614600 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1209 04:35:40.315651 1614600 command_runner.go:130] > # 	"image_pulls_success_total",
	I1209 04:35:40.315661 1614600 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1209 04:35:40.315666 1614600 command_runner.go:130] > # 	"containers_oom_count_total",
	I1209 04:35:40.315675 1614600 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1209 04:35:40.315849 1614600 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1209 04:35:40.315864 1614600 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1209 04:35:40.315868 1614600 command_runner.go:130] > # ]
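
[Editor's note] The prefix rule stated above the collector list means three spellings name the same collector. A small Go sketch of that normalization, under the assumption that stripping the two documented prefixes is all that is required:

package main

import (
	"fmt"
	"strings"
)

// normalizeCollector strips the documented prefixes so that "operations",
// "crio_operations" and "container_runtime_crio_operations" all compare equal.
func normalizeCollector(name string) string {
	name = strings.TrimPrefix(name, "container_runtime_")
	name = strings.TrimPrefix(name, "crio_")
	return name
}

func main() {
	for _, n := range []string{"operations", "crio_operations", "container_runtime_crio_operations"} {
		fmt.Println(normalizeCollector(n)) // all print "operations"
	}
}
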
	I1209 04:35:40.315880 1614600 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1209 04:35:40.315884 1614600 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1209 04:35:40.315889 1614600 command_runner.go:130] > # The port on which the metrics server will listen.
	I1209 04:35:40.315893 1614600 command_runner.go:130] > # metrics_port = 9090
	I1209 04:35:40.315899 1614600 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1209 04:35:40.315907 1614600 command_runner.go:130] > # metrics_socket = ""
	I1209 04:35:40.315912 1614600 command_runner.go:130] > # The certificate for the secure metrics server.
	I1209 04:35:40.315921 1614600 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1209 04:35:40.315929 1614600 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1209 04:35:40.315937 1614600 command_runner.go:130] > # certificate on any modification event.
	I1209 04:35:40.315944 1614600 command_runner.go:130] > # metrics_cert = ""
	I1209 04:35:40.315953 1614600 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1209 04:35:40.315959 1614600 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1209 04:35:40.315968 1614600 command_runner.go:130] > # metrics_key = ""
	I1209 04:35:40.315974 1614600 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1209 04:35:40.315982 1614600 command_runner.go:130] > [crio.tracing]
	I1209 04:35:40.315987 1614600 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1209 04:35:40.315996 1614600 command_runner.go:130] > # enable_tracing = false
	I1209 04:35:40.316002 1614600 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1209 04:35:40.316009 1614600 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1209 04:35:40.316017 1614600 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1209 04:35:40.316027 1614600 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1209 04:35:40.316032 1614600 command_runner.go:130] > # CRI-O NRI configuration.
	I1209 04:35:40.316035 1614600 command_runner.go:130] > [crio.nri]
	I1209 04:35:40.316040 1614600 command_runner.go:130] > # Globally enable or disable NRI.
	I1209 04:35:40.316043 1614600 command_runner.go:130] > # enable_nri = true
	I1209 04:35:40.316047 1614600 command_runner.go:130] > # NRI socket to listen on.
	I1209 04:35:40.316051 1614600 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1209 04:35:40.316055 1614600 command_runner.go:130] > # NRI plugin directory to use.
	I1209 04:35:40.316064 1614600 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1209 04:35:40.316069 1614600 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1209 04:35:40.316077 1614600 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1209 04:35:40.316083 1614600 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1209 04:35:40.316147 1614600 command_runner.go:130] > # nri_disable_connections = false
	I1209 04:35:40.316157 1614600 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1209 04:35:40.316162 1614600 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1209 04:35:40.316185 1614600 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1209 04:35:40.316193 1614600 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1209 04:35:40.316198 1614600 command_runner.go:130] > # NRI default validator configuration.
	I1209 04:35:40.316205 1614600 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1209 04:35:40.316215 1614600 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1209 04:35:40.316220 1614600 command_runner.go:130] > # can be restricted/rejected:
	I1209 04:35:40.316224 1614600 command_runner.go:130] > # - OCI hook injection
	I1209 04:35:40.316233 1614600 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1209 04:35:40.316238 1614600 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1209 04:35:40.316243 1614600 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1209 04:35:40.316247 1614600 command_runner.go:130] > # - adjustment of linux namespaces
	I1209 04:35:40.316254 1614600 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1209 04:35:40.316264 1614600 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1209 04:35:40.316271 1614600 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1209 04:35:40.316277 1614600 command_runner.go:130] > #
	I1209 04:35:40.316282 1614600 command_runner.go:130] > # [crio.nri.default_validator]
	I1209 04:35:40.316290 1614600 command_runner.go:130] > # nri_enable_default_validator = false
	I1209 04:35:40.316295 1614600 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1209 04:35:40.316307 1614600 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1209 04:35:40.316317 1614600 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1209 04:35:40.316322 1614600 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1209 04:35:40.316327 1614600 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1209 04:35:40.316480 1614600 command_runner.go:130] > # nri_validator_required_plugins = [
	I1209 04:35:40.316508 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.316521 1614600 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1209 04:35:40.316528 1614600 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1209 04:35:40.316540 1614600 command_runner.go:130] > [crio.stats]
	I1209 04:35:40.316546 1614600 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1209 04:35:40.316551 1614600 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1209 04:35:40.316555 1614600 command_runner.go:130] > # stats_collection_period = 0
	I1209 04:35:40.316562 1614600 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1209 04:35:40.316572 1614600 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1209 04:35:40.316577 1614600 command_runner.go:130] > # collection_period = 0
	I1209 04:35:40.318311 1614600 command_runner.go:130] ! time="2025-12-09T04:35:40.282255082Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1209 04:35:40.318330 1614600 command_runner.go:130] ! time="2025-12-09T04:35:40.2822971Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1209 04:35:40.318340 1614600 command_runner.go:130] ! time="2025-12-09T04:35:40.282328904Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1209 04:35:40.318349 1614600 command_runner.go:130] ! time="2025-12-09T04:35:40.282355243Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1209 04:35:40.318358 1614600 command_runner.go:130] ! time="2025-12-09T04:35:40.282430665Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:35:40.318367 1614600 command_runner.go:130] ! time="2025-12-09T04:35:40.282713695Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1209 04:35:40.318382 1614600 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1209 04:35:40.318459 1614600 cni.go:84] Creating CNI manager for ""
	I1209 04:35:40.318484 1614600 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 04:35:40.318506 1614600 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 04:35:40.318532 1614600 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-331811 NodeName:functional-331811 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 04:35:40.318689 1614600 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-331811"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 04:35:40.318765 1614600 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1209 04:35:40.328360 1614600 command_runner.go:130] > kubeadm
	I1209 04:35:40.328381 1614600 command_runner.go:130] > kubectl
	I1209 04:35:40.328387 1614600 command_runner.go:130] > kubelet
	I1209 04:35:40.329285 1614600 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 04:35:40.329353 1614600 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 04:35:40.336944 1614600 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1209 04:35:40.349970 1614600 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1209 04:35:40.362809 1614600 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1209 04:35:40.375503 1614600 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1209 04:35:40.379345 1614600 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1209 04:35:40.379778 1614600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:35:40.502305 1614600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 04:35:41.326409 1614600 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811 for IP: 192.168.49.2
	I1209 04:35:41.326563 1614600 certs.go:195] generating shared ca certs ...
	I1209 04:35:41.326611 1614600 certs.go:227] acquiring lock for ca certs: {Name:mkbe8bce08db7aa945866791683d426e1b560718 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:35:41.326833 1614600 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key
	I1209 04:35:41.326887 1614600 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key
	I1209 04:35:41.326895 1614600 certs.go:257] generating profile certs ...
	I1209 04:35:41.327067 1614600 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.key
	I1209 04:35:41.327129 1614600 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.key.29f4af34
	I1209 04:35:41.327233 1614600 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.key
	I1209 04:35:41.327250 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 04:35:41.327267 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 04:35:41.327279 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 04:35:41.327290 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 04:35:41.327349 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 04:35:41.327367 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 04:35:41.327413 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 04:35:41.327427 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 04:35:41.327509 1614600 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem (1338 bytes)
	W1209 04:35:41.327593 1614600 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521_empty.pem, impossibly tiny 0 bytes
	I1209 04:35:41.327604 1614600 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 04:35:41.327677 1614600 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem (1078 bytes)
	I1209 04:35:41.327750 1614600 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem (1123 bytes)
	I1209 04:35:41.327813 1614600 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem (1675 bytes)
	I1209 04:35:41.327913 1614600 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem (1708 bytes)
	I1209 04:35:41.327983 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem -> /usr/share/ca-certificates/1580521.pem
	I1209 04:35:41.328001 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem -> /usr/share/ca-certificates/15805212.pem
	I1209 04:35:41.328047 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:35:41.328720 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 04:35:41.349998 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 04:35:41.370613 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 04:35:41.391438 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 04:35:41.410483 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 04:35:41.429428 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 04:35:41.449234 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 04:35:41.468289 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 04:35:41.486148 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem --> /usr/share/ca-certificates/1580521.pem (1338 bytes)
	I1209 04:35:41.504497 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem --> /usr/share/ca-certificates/15805212.pem (1708 bytes)
	I1209 04:35:41.523111 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 04:35:41.542281 1614600 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 04:35:41.555566 1614600 ssh_runner.go:195] Run: openssl version
	I1209 04:35:41.561986 1614600 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1209 04:35:41.562090 1614600 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1580521.pem
	I1209 04:35:41.569846 1614600 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1580521.pem /etc/ssl/certs/1580521.pem
	I1209 04:35:41.577817 1614600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1580521.pem
	I1209 04:35:41.581778 1614600 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  9 04:27 /usr/share/ca-certificates/1580521.pem
	I1209 04:35:41.581849 1614600 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:27 /usr/share/ca-certificates/1580521.pem
	I1209 04:35:41.581927 1614600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1580521.pem
	I1209 04:35:41.622889 1614600 command_runner.go:130] > 51391683
	I1209 04:35:41.623441 1614600 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 04:35:41.630995 1614600 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15805212.pem
	I1209 04:35:41.638454 1614600 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15805212.pem /etc/ssl/certs/15805212.pem
	I1209 04:35:41.646110 1614600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15805212.pem
	I1209 04:35:41.649703 1614600 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  9 04:27 /usr/share/ca-certificates/15805212.pem
	I1209 04:35:41.649815 1614600 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:27 /usr/share/ca-certificates/15805212.pem
	I1209 04:35:41.649886 1614600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15805212.pem
	I1209 04:35:41.690940 1614600 command_runner.go:130] > 3ec20f2e
	I1209 04:35:41.691023 1614600 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 04:35:41.698710 1614600 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:35:41.705943 1614600 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 04:35:41.713451 1614600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:35:41.717157 1614600 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  9 04:17 /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:35:41.717250 1614600 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:17 /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:35:41.717310 1614600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:35:41.757537 1614600 command_runner.go:130] > b5213941
	I1209 04:35:41.757976 1614600 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
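
[Editor's note] The openssl/ln sequence above is the standard OpenSSL CA-directory layout: hash each certificate's subject and symlink <hash>.0 to it so verifiers can find the CA by hash. For reference, a sketch of the same flow driven from Go with os/exec, using a cert path from the log; this is an illustration of the shell steps, not minikube's actual implementation.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/1580521.pem" // path from the log above

	// Ask openssl for the subject hash, exactly as the runner does.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "51391683"

	// Symlink <hash>.0 into the system cert directory so verifiers find it.
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // mirror `ln -fs`: replace any existing link
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
}
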
	I1209 04:35:41.765482 1614600 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 04:35:41.769213 1614600 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 04:35:41.769237 1614600 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1209 04:35:41.769244 1614600 command_runner.go:130] > Device: 259,1	Inode: 1322432     Links: 1
	I1209 04:35:41.769251 1614600 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1209 04:35:41.769256 1614600 command_runner.go:130] > Access: 2025-12-09 04:31:33.728838377 +0000
	I1209 04:35:41.769262 1614600 command_runner.go:130] > Modify: 2025-12-09 04:27:28.466831926 +0000
	I1209 04:35:41.769267 1614600 command_runner.go:130] > Change: 2025-12-09 04:27:28.466831926 +0000
	I1209 04:35:41.769272 1614600 command_runner.go:130] >  Birth: 2025-12-09 04:27:28.466831926 +0000
	I1209 04:35:41.769363 1614600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 04:35:41.810027 1614600 command_runner.go:130] > Certificate will not expire
	I1209 04:35:41.810619 1614600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 04:35:41.851168 1614600 command_runner.go:130] > Certificate will not expire
	I1209 04:35:41.851713 1614600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 04:35:41.892758 1614600 command_runner.go:130] > Certificate will not expire
	I1209 04:35:41.892839 1614600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 04:35:41.938176 1614600 command_runner.go:130] > Certificate will not expire
	I1209 04:35:41.938689 1614600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 04:35:41.979665 1614600 command_runner.go:130] > Certificate will not expire
	I1209 04:35:41.980184 1614600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1209 04:35:42.021167 1614600 command_runner.go:130] > Certificate will not expire
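
[Editor's note] `openssl x509 -checkend 86400` answers one question: does the certificate expire within the next 86400 seconds (24 hours)? The equivalent check in native Go, assuming a PEM file path taken from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/front-proxy-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of `-checkend 86400`: does the cert outlive the next 24 hours?
	if time.Now().Add(24 * time.Hour).Before(cert.NotAfter) {
		fmt.Println("Certificate will not expire")
	} else {
		fmt.Println("Certificate will expire")
	}
}
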
	I1209 04:35:42.021686 1614600 kubeadm.go:401] StartCluster: {Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:35:42.021825 1614600 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 04:35:42.021936 1614600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:35:42.052115 1614600 cri.go:89] found id: ""
	I1209 04:35:42.052191 1614600 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 04:35:42.060116 1614600 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1209 04:35:42.060196 1614600 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1209 04:35:42.060220 1614600 command_runner.go:130] > /var/lib/minikube/etcd:
	I1209 04:35:42.061227 1614600 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1209 04:35:42.061247 1614600 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1209 04:35:42.061342 1614600 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 04:35:42.070417 1614600 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:35:42.071064 1614600 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-331811" does not appear in /home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:35:42.071256 1614600 kubeconfig.go:62] /home/jenkins/minikube-integration/22081-1577059/kubeconfig needs updating (will repair): [kubeconfig missing "functional-331811" cluster setting kubeconfig missing "functional-331811" context setting]
	I1209 04:35:42.071646 1614600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/kubeconfig: {Name:mk56da51bd85daae017f7ca18ae73d8a385a4c6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:35:42.072159 1614600 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:35:42.072417 1614600 kapi.go:59] client config for functional-331811: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt", KeyFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.key", CAFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 04:35:42.073140 1614600 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1209 04:35:42.073224 1614600 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1209 04:35:42.073266 1614600 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1209 04:35:42.073391 1614600 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1209 04:35:42.073418 1614600 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1209 04:35:42.073437 1614600 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1209 04:35:42.073813 1614600 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 04:35:42.085766 1614600 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1209 04:35:42.085868 1614600 kubeadm.go:602] duration metric: took 24.612846ms to restartPrimaryControlPlane
	I1209 04:35:42.085898 1614600 kubeadm.go:403] duration metric: took 64.220222ms to StartCluster
	I1209 04:35:42.085947 1614600 settings.go:142] acquiring lock: {Name:mk2ff9b0d23dc8757d89015af482b8c477568e49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:35:42.086095 1614600 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:35:42.086834 1614600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/kubeconfig: {Name:mk56da51bd85daae017f7ca18ae73d8a385a4c6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:35:42.087380 1614600 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 04:35:42.087524 1614600 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 04:35:42.087628 1614600 addons.go:70] Setting storage-provisioner=true in profile "functional-331811"
	I1209 04:35:42.087691 1614600 addons.go:239] Setting addon storage-provisioner=true in "functional-331811"
	I1209 04:35:42.087740 1614600 host.go:66] Checking if "functional-331811" exists ...
	I1209 04:35:42.088325 1614600 cli_runner.go:164] Run: docker container inspect functional-331811 --format={{.State.Status}}
	I1209 04:35:42.087482 1614600 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 04:35:42.089019 1614600 addons.go:70] Setting default-storageclass=true in profile "functional-331811"
	I1209 04:35:42.089039 1614600 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-331811"
	I1209 04:35:42.089353 1614600 cli_runner.go:164] Run: docker container inspect functional-331811 --format={{.State.Status}}
	I1209 04:35:42.092155 1614600 out.go:179] * Verifying Kubernetes components...
	I1209 04:35:42.095248 1614600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:35:42.128430 1614600 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 04:35:42.131623 1614600 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:42.131651 1614600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 04:35:42.131731 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:42.147694 1614600 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:35:42.147902 1614600 kapi.go:59] client config for functional-331811: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt", KeyFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.key", CAFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 04:35:42.148207 1614600 addons.go:239] Setting addon default-storageclass=true in "functional-331811"
	I1209 04:35:42.148248 1614600 host.go:66] Checking if "functional-331811" exists ...
	I1209 04:35:42.148712 1614600 cli_runner.go:164] Run: docker container inspect functional-331811 --format={{.State.Status}}
	I1209 04:35:42.182846 1614600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:35:42.193184 1614600 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:42.193209 1614600 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 04:35:42.193289 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:42.220341 1614600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:35:42.327312 1614600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 04:35:42.346850 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:42.376931 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:43.076226 1614600 node_ready.go:35] waiting up to 6m0s for node "functional-331811" to be "Ready" ...
	I1209 04:35:43.076344 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:43.076396 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
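
[Editor's note] The GETs above are minikube polling /api/v1/nodes/functional-331811 until the node reports Ready. A condensed client-go sketch of such a wait loop, assuming a recent client-go and taking the kubeconfig path and node name from the log; this is an illustration, not minikube's node_ready implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22081-1577059/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll for up to 6 minutes, matching the "waiting up to 6m0s" line above.
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "functional-331811", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node to be Ready")
}
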
	I1209 04:35:43.076607 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:43.076635 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.076655 1614600 retry.go:31] will retry after 310.700454ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.076685 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:43.076702 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.076708 1614600 retry.go:31] will retry after 282.763546ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.076773 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:43.360393 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:43.387801 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:43.432930 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:43.433022 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.433059 1614600 retry.go:31] will retry after 489.220325ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.460835 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:43.460941 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.460967 1614600 retry.go:31] will retry after 355.931225ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.577252 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:43.577329 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:43.577711 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:43.817107 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:43.911473 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:43.915604 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.915640 1614600 retry.go:31] will retry after 537.488813ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.922787 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:43.976592 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:43.980371 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.980407 1614600 retry.go:31] will retry after 753.380628ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:44.076554 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:44.076652 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:44.077073 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:44.453574 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:44.512034 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:44.512090 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:44.512116 1614600 retry.go:31] will retry after 707.625417ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:44.577247 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:44.577348 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:44.577656 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:44.734008 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:44.795873 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:44.795936 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:44.795960 1614600 retry.go:31] will retry after 1.127913267s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:45.077396 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:45.077480 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:45.077910 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:35:45.077993 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:35:45.220540 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:45.296909 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:45.296951 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:45.296996 1614600 retry.go:31] will retry after 917.152391ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:45.577366 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:45.577441 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:45.577737 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:45.924157 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:45.995176 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:45.995217 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:45.995239 1614600 retry.go:31] will retry after 1.420775217s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
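
The retry.go:31 lines show minikube backing off between attempts with jittered, roughly exponential delays (310ms and 282ms at first, growing through 489ms, 753ms, 1.4s and beyond). A sketch of the same pattern using the k8s.io/apimachinery wait helpers; minikube's own retry implementation may differ in its exact parameters:

```go
package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	backoff := wait.Backoff{
		Duration: 300 * time.Millisecond, // initial delay, in the range seen above
		Factor:   1.5,                    // grow the delay each attempt
		Jitter:   0.5,                    // randomize, matching the uneven delays in the log
		Steps:    10,                     // give up after this many attempts
	}
	attempt := 0
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		attempt++
		fmt.Printf("attempt %d\n", attempt)
		// Return true, nil once the apply succeeds; false, nil retries
		// after the next backoff interval.
		return false, nil
	})
	if errors.Is(err, wait.ErrWaitTimeout) {
		fmt.Println("gave up after all retries")
	}
}
```
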
	I1209 04:35:46.077446 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:46.077526 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:46.077798 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:46.215234 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:46.279745 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:46.279823 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:46.279850 1614600 retry.go:31] will retry after 1.336322791s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:46.577242 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:46.577341 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:46.577688 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:47.077361 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:47.077438 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:47.077723 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:47.416255 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:47.477013 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:47.480365 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:47.480397 1614600 retry.go:31] will retry after 2.174557655s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:47.576489 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:47.576616 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:47.576955 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:35:47.577044 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:35:47.617100 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:47.681529 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:47.681577 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:47.681598 1614600 retry.go:31] will retry after 3.276200411s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:48.077115 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:48.077203 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:48.077555 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:48.577382 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:48.577481 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:48.577821 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:49.076458 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:49.076528 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:49.076798 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:49.576545 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:49.576626 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:49.576988 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:49.655381 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:49.715000 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:49.715035 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:49.715054 1614600 retry.go:31] will retry after 3.337758974s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:50.077421 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:50.077518 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:50.077847 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:35:50.077903 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
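
In parallel with the addon retries, node_ready.go polls GET /api/v1/nodes/functional-331811 every 500ms and logs a warning whenever the node's Ready condition cannot be read. Roughly the equivalent check with client-go, a sketch built from the paths shown in the log rather than minikube's actual code:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady fetches the node and scans its conditions for Ready=True.
func nodeReady(clientset *kubernetes.Clientset, name string) (bool, error) {
	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		// e.g. dial tcp 192.168.49.2:8441: connect: connection refused,
		// which is what the warnings above report.
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)
	ready, err := nodeReady(clientset, "functional-331811")
	fmt.Println(ready, err)
}
```
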
	I1209 04:35:50.576531 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:50.576630 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:50.576967 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:50.958720 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:51.022646 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:51.022681 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:51.022700 1614600 retry.go:31] will retry after 4.624703928s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:51.077048 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:51.077142 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:51.077474 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:51.577259 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:51.577334 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:51.577661 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:52.076578 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:52.076656 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:52.076943 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:52.576488 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:52.576565 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:52.576896 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:35:52.576958 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:35:53.053753 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:53.077246 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:53.077324 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:53.077594 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:53.113242 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:53.113284 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:53.113306 1614600 retry.go:31] will retry after 2.734988542s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:53.576425 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:53.576526 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:53.576833 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:54.076533 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:54.076634 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:54.076949 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:54.576551 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:54.576653 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:54.577004 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:35:54.577071 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
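
The paired round_trippers.go:527/:632 lines are client-go's request/response debug logging, enabled here by the high log verbosity; status="" headers="" milliseconds=0 means the TCP dial failed before any HTTP exchange took place. A stand-in logging http.RoundTripper that produces the same shape of output (not client-go's actual implementation):

```go
package main

import (
	"log"
	"net/http"
	"time"
)

// loggingRoundTripper logs the verb and URL before each request and the
// status plus latency after it; when the dial fails, the status stays empty,
// mirroring the status="" milliseconds=0 lines above.
type loggingRoundTripper struct {
	next http.RoundTripper
}

func (l loggingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	start := time.Now()
	log.Printf("Request verb=%q url=%q", req.Method, req.URL)
	resp, err := l.next.RoundTrip(req)
	status := ""
	if resp != nil {
		status = resp.Status
	}
	log.Printf("Response status=%q milliseconds=%d", status, time.Since(start).Milliseconds())
	return resp, err
}

func main() {
	client := &http.Client{Transport: loggingRoundTripper{next: http.DefaultTransport}}
	_, err := client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-331811")
	log.Println(err)
}
```
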
	I1209 04:35:55.076426 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:55.076500 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:55.076811 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:55.576518 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:55.576596 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:55.576936 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:55.648391 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:55.705094 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:55.708789 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:55.708820 1614600 retry.go:31] will retry after 6.736330921s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:55.849034 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:55.918734 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:55.918780 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:55.918800 1614600 retry.go:31] will retry after 8.152075725s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:56.077153 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:56.077246 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:56.077636 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:56.577352 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:56.577427 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:56.577693 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:35:56.577743 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:35:57.077398 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:57.077499 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:57.077829 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:57.576552 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:57.576635 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:57.576959 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:58.076583 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:58.076666 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:58.076931 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:58.576498 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:58.576587 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:58.576893 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:59.076592 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:59.076667 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:59.077034 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:35:59.077089 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:35:59.576459 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:59.576533 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:59.576805 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:00.076586 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:00.076681 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:00.077014 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:00.576522 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:00.576616 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:00.577002 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:01.076587 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:01.076666 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:01.076947 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:01.576525 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:01.576599 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:01.576933 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:01.576991 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:02.077159 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:02.077237 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:02.077605 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:02.446164 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:36:02.502744 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:36:02.506462 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:02.506498 1614600 retry.go:31] will retry after 8.388840508s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
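
Each apply attempt shells out, via ssh_runner, to the version-pinned kubectl binary with KUBECONFIG passed through sudo, exactly as the Run: lines show. A sketch of that invocation with os/exec, assuming local execution rather than minikube's SSH transport:

```go
package main

import (
	"fmt"
	"os/exec"
)

// applyManifest mirrors the command the log shows ssh_runner executing;
// sudo accepts the leading VAR=value argument as an environment setting.
func applyManifest(path string) error {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"apply", "--force", "-f", path)
	out, err := cmd.CombinedOutput()
	if err != nil {
		// Surfaces the same "Process exited with status 1" plus stderr
		// that addons.go logs before scheduling a retry.
		return fmt.Errorf("apply %s: %w\n%s", path, err, out)
	}
	return nil
}

func main() {
	fmt.Println(applyManifest("/etc/kubernetes/addons/storageclass.yaml"))
}
```
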
	I1209 04:36:02.576683 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:02.576758 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:02.577095 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:03.076524 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:03.076604 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:03.076977 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:03.576704 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:03.576784 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:03.577119 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:03.577179 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:04.071900 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:36:04.076533 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:04.076606 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:04.076869 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:04.150537 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:36:04.154620 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:04.154650 1614600 retry.go:31] will retry after 8.078270125s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:04.577310 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:04.577452 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:04.577816 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:05.076556 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:05.076634 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:05.077025 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:05.576594 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:05.576672 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:05.576950 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:06.076647 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:06.076738 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:06.077077 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:06.077129 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:06.576522 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:06.576621 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:06.576938 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:07.076823 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:07.076900 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:07.077209 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:07.577024 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:07.577097 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:07.577441 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:08.077262 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:08.077341 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:08.077670 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:08.077723 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:08.577265 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:08.577344 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:08.577616 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:09.077403 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:09.077482 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:09.077835 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:09.576413 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:09.576503 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:09.576813 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:10.076504 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:10.076593 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:10.076887 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:10.576575 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:10.576673 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:10.576991 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:10.577053 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:10.895548 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:36:10.953462 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:36:10.957148 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:10.957180 1614600 retry.go:31] will retry after 18.757746695s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
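
The retry delays recorded by retry.go vary between roughly 13s and 28s across attempts, which is consistent with a randomized (jittered) backoff rather than a fixed interval. A small sketch of that pattern, with illustrative constants; minikube's actual retry policy may differ.

// Retry an operation with exponential backoff plus random jitter,
// logging a "will retry after" line before each sleep.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func retryWithJitter(attempts int, base time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		d := base * time.Duration(1<<i)                // exponential growth
		d += time.Duration(rand.Int63n(int64(d / 2))) // up to +50% jitter
		fmt.Printf("will retry after %s: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	_ = retryWithJitter(3, 10*time.Second, func() error {
		return fmt.Errorf("connect: connection refused")
	})
}
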
	I1209 04:36:11.076395 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:11.076478 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:11.076772 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:11.576443 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:11.576513 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:11.576815 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:12.076936 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:12.077013 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:12.077309 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:12.233682 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:36:12.292817 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:36:12.296392 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:12.296423 1614600 retry.go:31] will retry after 20.023788924s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:12.576943 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:12.577019 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:12.577364 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:12.577421 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:13.077108 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:13.077239 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:13.077603 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:13.577256 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:13.577343 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:13.577689 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:14.077313 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:14.077412 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:14.077731 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:14.576427 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:14.576496 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:14.576774 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:15.076490 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:15.076583 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:15.076938 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:15.076994 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:15.576474 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:15.576555 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:15.576853 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:16.076431 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:16.076506 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:16.076783 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:16.576527 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:16.576609 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:16.576956 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:17.076988 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:17.077082 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:17.077457 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:17.077514 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:17.577068 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:17.577144 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:17.577409 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:18.077285 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:18.077383 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:18.077755 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:18.576466 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:18.576544 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:18.576909 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:19.076597 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:19.076666 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:19.076929 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:19.576602 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:19.576675 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:19.577011 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:19.577070 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:20.076579 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:20.076658 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:20.076980 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:20.576450 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:20.576531 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:20.576849 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:21.076506 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:21.076594 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:21.076946 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:21.576536 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:21.576638 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:21.576994 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:22.077314 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:22.077388 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:22.077670 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:22.077714 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:22.576513 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:22.576607 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:22.576958 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:23.076502 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:23.076595 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:23.076934 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:23.576637 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:23.576705 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:23.577060 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:24.076759 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:24.076839 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:24.077254 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:24.576837 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:24.576916 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:24.577306 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:24.577364 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
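
When the apiserver finally responds, the "Ready" status this loop is waiting on comes from the node's status.conditions list. A stdlib sketch of that check against the JSON form of the Node object; the field names follow the Kubernetes API, but the helper itself is hypothetical.

// Decode a Node object and report whether its Ready condition is True.
package main

import (
	"encoding/json"
	"fmt"
)

type node struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func isReady(body []byte) (bool, error) {
	var n node
	if err := json.Unmarshal(body, &n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, fmt.Errorf("node reports no Ready condition")
}

func main() {
	sample := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False"}]}}`)
	fmt.Println(isReady(sample))
}
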
	I1209 04:36:25.077118 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:25.077190 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:25.077463 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:25.577272 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:25.577348 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:25.577737 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:26.077403 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:26.077487 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:26.077842 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:26.576440 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:26.576511 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:26.576779 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:27.076863 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:27.076944 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:27.077310 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:27.077367 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:27.577163 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:27.577241 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:27.577580 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:28.077311 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:28.077379 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:28.077629 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:28.577399 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:28.577473 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:28.577808 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:29.076424 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:29.076514 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:29.076878 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:29.576577 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:29.576646 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:29.576910 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:29.576955 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:29.715418 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:36:29.773517 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:36:29.777518 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:29.777549 1614600 retry.go:31] will retry after 13.466249075s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:30.077059 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:30.077150 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:30.077512 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:30.577014 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:30.577100 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:30.577433 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:31.077181 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:31.077268 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:31.077521 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:31.577348 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:31.577443 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:31.577801 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:31.577857 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:32.076722 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:32.076806 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:32.077154 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:32.320502 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:36:32.377593 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:36:32.381870 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:32.381909 1614600 retry.go:31] will retry after 28.435049856s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
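
Note on the validation failures above: kubectl apply validates manifests against the OpenAPI schema it downloads from the apiserver, so with the apiserver refusing connections the command fails during validation, before any apply is attempted. The suggested --validate=false would only skip that download; the apply itself would still fail until the apiserver on port 8441 is reachable again, which is why the addon code keeps retrying instead.
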
	I1209 04:36:32.577214 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:32.577283 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:32.577547 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:33.077429 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:33.077516 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:33.077823 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:33.576506 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:33.576632 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:33.576978 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:34.076485 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:34.076586 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:34.076922 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:34.076973 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:34.576560 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:34.576639 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:34.576951 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:35.076511 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:35.076628 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:35.076979 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:35.576473 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:35.576575 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:35.576844 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:36.076491 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:36.076571 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:36.076926 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:36.576535 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:36.576620 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:36.576977 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:36.577035 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:37.076803 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:37.076875 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:37.077215 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:37.577050 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:37.577125 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:37.577459 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:38.077398 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:38.077495 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:38.077876 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:38.576584 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:38.576668 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:38.576989 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:39.076692 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:39.076768 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:39.077121 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:39.077180 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:39.576496 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:39.576575 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:39.576911 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:40.076578 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:40.076653 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:40.077016 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:40.576532 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:40.576612 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:40.576898 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:41.076617 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:41.076698 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:41.077052 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:41.576584 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:41.576671 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:41.576937 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:41.576987 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:42.076459 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:42.076556 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:42.076942 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:42.576531 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:42.576610 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:42.576958 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:43.076568 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:43.076663 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:43.077002 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:43.244488 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:36:43.308556 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:36:43.308599 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:43.308622 1614600 retry.go:31] will retry after 20.568808948s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:43.577020 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:43.577099 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:43.577399 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:43.577456 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:44.077183 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:44.077280 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:44.077609 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:44.577311 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:44.577390 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:44.577747 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:45.076609 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:45.076692 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:45.077821 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1209 04:36:45.576471 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:45.576555 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:45.576880 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:46.076459 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:46.076531 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:46.076837 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:46.076889 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:46.576488 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:46.576565 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:46.576859 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:47.076876 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:47.076949 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:47.077253 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:47.577001 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:47.577079 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:47.577339 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:48.077087 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:48.077173 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:48.077495 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:48.077544 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:48.577135 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:48.577218 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:48.577531 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:49.077177 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:49.077246 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:49.077507 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:49.577363 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:49.577442 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:49.577806 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:50.076499 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:50.076584 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:50.076933 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:50.576621 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:50.576693 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:50.577013 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:50.577067 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:51.076722 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:51.076799 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:51.077123 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:51.576506 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:51.576581 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:51.576933 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:52.076970 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:52.077045 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:52.077314 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:52.577191 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:52.577272 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:52.577623 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:52.577685 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:53.076390 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:53.076468 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:53.076830 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:53.577353 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:53.577471 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:53.577714 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:54.076421 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:54.076508 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:54.076889 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:54.576481 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:54.576586 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:54.576925 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:55.076607 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:55.076685 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:55.077020 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:55.077081 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:55.576488 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:55.576567 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:55.576912 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:56.076526 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:56.076606 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:56.076949 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:56.577383 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:56.577451 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:56.577701 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:57.076714 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:57.076787 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:57.077117 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:57.077170 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:57.576491 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:57.576573 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:57.576896 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:58.076441 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:58.076535 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:58.076850 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:58.576483 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:58.576569 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:58.576887 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:59.076498 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:59.076574 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:59.076928 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:59.576518 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:59.576600 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:59.576972 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:59.577037 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:00.076760 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:00.076863 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:00.077187 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:00.576907 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:00.576998 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:00.577391 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:00.817971 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:37:00.880147 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:37:00.880206 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:37:00.880224 1614600 retry.go:31] will retry after 16.46927575s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
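
The "will retry after 16.46927575s" line above comes from minikube's retry helper (retry.go): a failed addon apply is re-run after a jittered, growing delay rather than at a fixed interval, which is why the retry delay is an odd fraction of a second. A minimal sketch of that pattern, assuming a simple doubling backoff with jitter (an illustration, not minikube's pkg/retry implementation):

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff runs fn up to attempts times, sleeping a jittered,
    // doubling delay between failures, like the retry.go log line above.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	delay := base
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		// Jitter so concurrent retries don't synchronize, then grow the delay.
    		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: %v\n", sleep, err)
    		time.Sleep(sleep)
    		delay *= 2
    	}
    	return err
    }

    func main() {
    	_ = retryWithBackoff(3, 2*time.Second, func() error {
    		return fmt.Errorf("connection refused")
    	})
    }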
	I1209 04:37:01.076478 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:01.076543 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:01.076797 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:01.576513 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:01.576588 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:01.576960 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:02.076888 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:02.076961 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:02.077278 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:02.077329 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:02.576827 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:02.576905 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:02.577242 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:03.076813 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:03.076885 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:03.077203 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:03.576472 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:03.576552 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:03.576886 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:03.878560 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:37:03.937026 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:37:03.940694 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:37:03.940802 1614600 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
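
The failure mode in the block above is worth noting: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver's /openapi/v2 endpoint, so with the apiserver unreachable on port 8441 the apply fails during validation, before any manifest is submitted. kubectl's own error message suggests --validate=false as the bypass. A hedged wrapper illustrating that fallback is below; the wrapper itself is hypothetical and is not minikube's addons code.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // applyManifest runs kubectl apply and, if client-side validation could not
    // download the OpenAPI schema, retries with validation disabled so the
    // apiserver validates on admission instead (once it is reachable).
    func applyManifest(path string) error {
    	out, err := exec.Command("kubectl", "apply", "--force", "-f", path).CombinedOutput()
    	if err == nil {
    		return nil
    	}
    	if strings.Contains(string(out), "failed to download openapi") {
    		out, err = exec.Command("kubectl", "apply", "--force", "--validate=false", "-f", path).CombinedOutput()
    	}
    	if err != nil {
    		return fmt.Errorf("apply %s: %v\n%s", path, err, out)
    	}
    	return nil
    }

    func main() {
    	if err := applyManifest("/etc/kubernetes/addons/storageclass.yaml"); err != nil {
    		fmt.Println(err)
    	}
    }

Note that --validate=false would not have helped here in any case: the subsequent server-side apply would hit the same refused connection.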
	I1209 04:37:04.077117 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:04.077194 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:04.077475 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:04.077526 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:04.577262 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:04.577353 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:04.577683 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:05.076432 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:05.076509 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:05.076859 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:05.576499 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:05.576570 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:05.576819 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:06.076507 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:06.076588 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:06.076929 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:06.576622 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:06.576698 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:06.577017 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:06.577081 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:07.077327 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:07.077411 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:07.077929 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:07.576501 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:07.576583 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:07.576933 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:08.076696 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:08.076799 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:08.077190 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:08.576870 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:08.576949 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:08.577244 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:08.577297 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:09.077127 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:09.077205 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:09.077553 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:09.577337 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:09.577415 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:09.577756 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:10.076460 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:10.076539 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:10.076863 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:10.576477 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:10.576568 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:10.576890 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:11.076583 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:11.076663 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:11.077008 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:11.077056 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:11.576443 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:11.576515 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:11.576833 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:12.076918 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:12.077013 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:12.077297 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:12.577105 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:12.577178 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:12.577483 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:13.077233 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:13.077301 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:13.077597 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:13.077653 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:13.577407 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:13.577483 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:13.577834 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:14.076503 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:14.076582 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:14.076903 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:14.576479 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:14.576560 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:14.576892 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:15.076512 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:15.076589 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:15.076989 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:15.576573 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:15.576653 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:15.577011 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:15.577067 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:16.076438 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:16.076506 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:16.076844 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:16.576547 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:16.576641 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:16.576975 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:17.076966 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:17.077042 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:17.077390 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:17.349771 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:37:17.409388 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:37:17.413192 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:37:17.413302 1614600 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1209 04:37:17.416242 1614600 out.go:179] * Enabled addons: 
	I1209 04:37:17.419770 1614600 addons.go:530] duration metric: took 1m35.33224358s for enable addons: enabled=[]
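
The empty "enabled=[]" in the duration metric above reflects that both addon callbacks (default-storageclass and storage-provisioner) exhausted their retries against the dead apiserver, so after 1m35s nothing was enabled. A sketch of the pattern behind that summary line, assuming a simple run-callbacks-and-collect shape (the names here are placeholders, not minikube's addons API):

    package main

    import (
    	"fmt"
    	"time"
    )

    // enableAddons times the enable phase and reports only the addons whose
    // callbacks succeeded, mirroring the "duration metric ... enabled=[]" line.
    func enableAddons(callbacks map[string]func() error) []string {
    	start := time.Now()
    	var enabled []string
    	for name, cb := range callbacks {
    		if err := cb(); err != nil {
    			fmt.Printf("! Enabling '%s' returned an error: %v\n", name, err)
    			continue
    		}
    		enabled = append(enabled, name)
    	}
    	fmt.Printf("duration metric: took %s for enable addons: enabled=%v\n",
    		time.Since(start), enabled)
    	return enabled
    }

    func main() {
    	enableAddons(map[string]func() error{
    		"storage-provisioner":  func() error { return fmt.Errorf("connection refused") },
    		"default-storageclass": func() error { return fmt.Errorf("connection refused") },
    	})
    }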
	I1209 04:37:17.576427 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:17.576504 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:17.576800 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:18.076477 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:18.076562 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:18.076914 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:18.076974 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:18.576508 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:18.576586 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:18.576933 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:19.076609 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:19.076683 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:19.077016 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:19.576492 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:19.576586 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:19.576903 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:20.076626 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:20.076704 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:20.077078 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:20.077138 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:20.576447 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:20.576514 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:20.576867 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:21.076557 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:21.076645 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:21.076996 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:21.576492 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:21.576568 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:21.576907 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:22.076971 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:22.077046 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:22.077320 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:22.077371 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:22.577119 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:22.577200 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:22.577508 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:23.077228 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:23.077302 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:23.077678 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:23.577301 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:23.577385 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:23.577646 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:24.077387 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:24.077467 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:24.077801 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:24.077859 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:24.576410 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:24.576486 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:24.576813 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:25.076445 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:25.076516 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:25.076845 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:25.576541 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:25.576634 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:25.576928 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:26.076617 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:26.076695 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:26.077076 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:26.576434 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:26.576510 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:26.576842 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:26.576894 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:27.077363 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:27.077438 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:27.077772 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:27.576489 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:27.576571 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:27.576899 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:28.076461 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:28.076533 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:28.076819 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:28.576482 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:28.576561 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:28.576853 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:29.076585 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:29.076670 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:29.077006 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:29.077067 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:29.576518 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:29.576604 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:29.576904 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:30.076534 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:30.076619 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:30.077013 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:30.576516 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:30.576599 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:30.576943 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:31.076627 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:31.076712 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:31.077034 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:31.576748 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:31.576823 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:31.577148 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:31.577206 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:32.077358 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:32.077437 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:32.077778 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:32.576461 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:32.576535 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:32.576870 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:33.076486 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:33.076565 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:33.076904 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:33.576613 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:33.576689 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:33.577020 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:34.076719 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:34.076790 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:34.077129 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:34.077191 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:34.576481 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:34.576554 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:34.576909 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:35.076619 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:35.076695 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:35.077045 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:35.576555 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:35.576651 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:35.576958 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:36.076520 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:36.076606 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:36.076943 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:36.576480 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:36.576557 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:36.576849 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:36.576893 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:37.076700 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:37.076768 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:37.077025 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:37.576452 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:37.576527 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:37.576844 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:38.076505 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:38.076581 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:38.076931 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:38.576488 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:38.576566 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:38.576841 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:39.076477 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:39.076559 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:39.076894 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:39.076952 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:39.576497 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:39.576582 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:39.576911 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:40.076451 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:40.076525 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:40.076830 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:40.576466 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:40.576543 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:40.576873 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:41.076505 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:41.076581 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:41.076918 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:41.076977 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:41.576436 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:41.576507 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:41.576804 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:42.076573 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:42.076649 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:42.077059 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:42.576779 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:42.576872 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:42.577233 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:43.077479 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:43.077558 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:43.077870 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:43.077918 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:43.576487 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:43.576579 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:43.576959 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... identical GET request/response cycles elided: the request above repeats every ~500ms from 04:37:44.076 through 04:38:45.577 with the same headers and an empty response; the periodic retry warnings from that interval are kept below ...]
	W1209 04:37:45.577268 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:37:48.076985 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:37:50.576876 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:37:52.577668 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:37:55.076908 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:37:57.077549 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:37:59.576883 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:38:01.577806 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:38:04.077249 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:38:06.576985 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:38:09.077009 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:38:11.077067 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:38:13.077607 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:38:15.577510 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:38:17.577730 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:38:20.077015 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:38:22.077433 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:38:24.577228 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:38:27.077166 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:38:29.576958 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:38:32.077494 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:38:34.576950 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:38:36.577591 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:38:39.076954 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:38:41.576870 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:38:43.577796 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:46.077241 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:46.077341 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:46.077667 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:46.077724 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:46.576427 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:46.576518 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:46.576860 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:47.076726 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:47.076801 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:47.077144 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:47.576574 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:47.576648 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:47.576923 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:48.076626 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:48.076715 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:48.077126 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:48.576849 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:48.576930 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:48.577268 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:48.577334 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:49.077051 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:49.077122 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:49.077394 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:49.577191 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:49.577270 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:49.577582 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:50.077370 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:50.077454 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:50.077810 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:50.576424 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:50.576502 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:50.576796 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:51.076506 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:51.076583 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:51.076910 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:51.076969 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:51.576623 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:51.576749 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:51.577040 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:52.077085 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:52.077160 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:52.077422 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:52.577216 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:52.577295 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:52.577613 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:53.077392 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:53.077475 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:53.077797 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:53.077856 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:53.576362 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:53.576448 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:53.576718 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:54.076489 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:54.076568 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:54.076906 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:54.576614 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:54.576695 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:54.577055 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:55.076745 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:55.076818 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:55.077132 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:55.576528 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:55.576605 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:55.576901 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:55.576949 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:56.076653 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:56.076741 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:56.077039 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:56.576380 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:56.576457 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:56.576717 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:57.076676 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:57.076750 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:57.077090 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:57.576453 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:57.576546 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:57.576855 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:58.076528 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:58.076633 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:58.076936 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:58.076991 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:58.576513 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:58.576586 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:58.576869 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:59.076607 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:59.076681 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:59.077015 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:59.576391 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:59.576459 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:59.576721 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:00.076467 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:00.076562 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:00.076886 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:00.576524 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:00.576622 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:00.576958 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:00.577017 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:01.076583 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:01.076670 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:01.077008 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:01.576525 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:01.576603 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:01.576887 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:02.077021 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:02.077100 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:02.077451 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:02.577124 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:02.577217 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:02.577512 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:02.577562 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:03.077323 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:03.077407 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:03.077775 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:03.576388 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:03.576462 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:03.576801 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:04.076514 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:04.076589 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:04.076927 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:04.576506 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:04.576586 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:04.576948 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:05.076534 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:05.076614 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:05.076965 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:05.077020 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:05.576441 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:05.576512 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:05.576828 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:06.076541 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:06.076627 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:06.076963 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:06.576692 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:06.576772 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:06.577111 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:07.076853 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:07.076924 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:07.077177 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:07.077219 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:07.576482 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:07.576580 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:07.576924 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:08.076518 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:08.076598 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:08.076971 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:08.576536 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:08.576605 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:08.576907 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:09.076495 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:09.076571 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:09.076930 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:09.576669 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:09.576753 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:09.577117 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:09.577174 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:10.076441 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:10.076525 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:10.076856 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:10.576508 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:10.576584 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:10.576962 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:11.076574 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:11.076664 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:11.077066 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:11.576620 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:11.576687 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:11.576941 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:12.077176 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:12.077252 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:12.077629 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:12.077711 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:12.576425 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:12.576516 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:12.576897 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:13.076570 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:13.076642 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:13.076950 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:13.576510 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:13.576587 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:13.576938 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:14.076477 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:14.076552 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:14.076894 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:14.576443 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:14.576522 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:14.576831 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:14.576881 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:15.076545 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:15.076624 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:15.076935 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:15.576475 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:15.576552 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:15.576870 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:16.076458 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:16.076538 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:16.076835 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:16.576450 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:16.576533 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:16.576890 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:16.576949 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:17.076773 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:17.076853 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:17.077193 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:17.576588 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:17.576661 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:17.576992 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:18.076473 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:18.076552 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:18.076899 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:18.576718 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:18.576802 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:18.577123 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:18.577182 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:19.076436 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:19.076509 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:19.076822 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:19.576524 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:19.576621 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:19.576983 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:20.076486 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:20.076564 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:20.076929 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:20.576479 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:20.576557 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:20.576928 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:21.076622 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:21.076716 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:21.077074 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:21.077128 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:21.576821 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:21.576903 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:21.577234 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:22.077298 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:22.077380 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:22.077644 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:22.576377 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:22.576459 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:22.576821 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:23.076525 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:23.076606 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:23.076901 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:23.576410 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:23.576486 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:23.576738 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:23.576788 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:24.076805 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:24.076886 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:24.077219 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:24.577078 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:24.577155 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:24.577448 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:25.077345 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:25.077571 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:25.078098 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:25.576509 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:25.576598 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:25.576942 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:25.576994 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:26.076519 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:26.076622 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:26.076931 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:26.576506 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:26.576571 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:26.576844 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:27.076774 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:27.076849 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:27.077183 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:27.577039 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:27.577116 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:27.577462 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:27.577520 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:28.077111 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:28.077189 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:28.077451 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:28.577185 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:28.577261 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:28.577578 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:29.077440 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:29.077520 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:29.077849 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:29.576465 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:29.576538 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:29.576812 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:30.076538 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:30.076629 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:30.076998 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:30.077061 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:30.576517 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:30.576595 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:30.576923 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:31.076574 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:31.076653 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:31.076955 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:31.576521 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:31.576595 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:31.576916 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:32.076880 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:32.076954 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:32.077270 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:32.077326 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:32.577067 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:32.577140 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:32.577413 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:33.077278 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:33.077360 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:33.077744 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:33.576501 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:33.576578 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:33.576898 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:34.076472 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:34.076561 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:34.076906 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:34.576597 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:34.576678 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:34.577003 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:34.577065 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:35.076500 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:35.076575 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:35.076882 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:35.576445 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:35.576524 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:35.576826 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:36.076479 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:36.076564 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:36.076904 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:36.576487 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:36.576571 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:36.576925 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:37.076840 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:37.076915 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:37.077171 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:37.077211 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:37.576860 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:37.576938 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:37.577250 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:38.077017 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:38.077094 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:38.077417 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:38.577139 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:38.577221 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:38.577485 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:39.077239 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:39.077314 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:39.077657 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:39.077722 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:39.576442 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:39.576520 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:39.576852 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:40.076585 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:40.076663 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:40.076928 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:40.576498 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:40.576578 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:40.576913 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:41.076507 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:41.076590 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:41.076933 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:41.576475 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:41.576545 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:41.576856 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:41.576911 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:42.077042 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:42.077129 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:42.077525 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:42.577190 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:42.577270 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:42.577607 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:43.076449 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:43.076528 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:43.077049 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:43.576527 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:43.576617 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:43.576993 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:43.577070 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:44.076789 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:44.076865 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:44.077206 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:44.576988 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:44.577058 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:44.577402 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:45.077482 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:45.077593 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:45.078175 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:45.577050 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:45.577162 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:45.577633 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:45.577692 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:46.077289 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:46.077367 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:46.077631 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:46.577385 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:46.577458 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:46.577783 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:47.076819 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:47.076895 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:47.077306 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:47.577090 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:47.577164 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:47.577430 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:48.077209 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:48.077287 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:48.077634 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:48.077694 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:48.576414 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:48.576492 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:48.576820 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:49.076429 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:49.076509 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:49.076812 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:49.576499 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:49.576573 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:49.576922 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:50.076634 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:50.076716 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:50.077027 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:50.576449 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:50.576535 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:50.576852 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:50.576904 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:51.076500 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:51.076582 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:51.076954 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:51.576645 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:51.576720 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:51.577036 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:52.077317 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:52.077391 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:52.077666 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:52.576377 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:52.576457 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:52.576786 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:53.076496 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:53.076575 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:53.076936 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:53.076994 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:53.576477 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:53.576556 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:53.576834 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:54.076502 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:54.076579 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:54.076906 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:54.576494 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:54.576578 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:54.576894 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:55.076446 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:55.076520 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:55.076829 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:55.576458 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:55.576544 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:55.576864 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:55.576922 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:56.076627 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:56.076713 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:56.077075 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:56.576606 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:56.576684 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:56.576957 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:57.076915 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:57.076989 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:57.077329 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:57.577142 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:57.577223 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:57.577545 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:57.577606 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:58.077310 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:58.077382 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:58.077644 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:58.576398 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:58.576474 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:58.576810 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:59.076495 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:59.076569 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:59.076901 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:59.576452 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:59.576522 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:59.576814 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:00.076619 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:00.076707 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:00.077051 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:00.077102 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:00.576782 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:00.576892 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:00.577341 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:01.077110 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:01.077188 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:01.077469 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:01.577344 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:01.577442 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:01.577802 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:02.077046 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:02.077122 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:02.077464 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:02.077524 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:02.577200 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:02.577280 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:02.577554 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:03.077335 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:03.077410 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:03.077751 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:03.576497 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:03.576579 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:03.576927 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:04.076619 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:04.076693 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:04.076986 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:04.576717 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:04.576802 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:04.577167 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:04.577233 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:05.077000 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:05.077083 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:05.077407 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:05.577162 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:05.577240 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:05.577561 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:06.077371 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:06.077455 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:06.077846 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:06.576606 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:06.576686 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:06.577045 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:07.076867 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:07.076956 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:07.077237 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:07.077285 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:07.577031 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:07.577112 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:07.577448 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:08.077143 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:08.077231 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:08.077595 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:08.577327 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:08.577403 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:08.577658 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:09.076424 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:09.076510 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:09.076843 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:09.576572 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:09.576654 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:09.577008 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:09.577065 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:10.076510 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:10.076592 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:10.076913 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:10.576495 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:10.576569 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:10.576912 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:11.076619 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:11.076698 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:11.077076 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:11.576765 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:11.576835 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:11.577096 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:11.577137 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:12.077236 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:12.077311 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:12.077690 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:12.576433 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:12.576521 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:12.576860 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:13.076474 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:13.076548 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:13.076826 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:13.576501 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:13.576589 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:13.576934 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:14.076645 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:14.076722 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:14.077046 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:14.077105 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:14.576455 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:14.576537 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:14.576860 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:15.076501 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:15.076587 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:15.076968 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:15.576689 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:15.576770 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:15.577097 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:16.076449 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:16.076527 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:16.076791 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:16.576477 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:16.576559 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:16.576904 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:16.576962 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:17.076730 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:17.076809 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:17.077145 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:17.576557 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:17.576637 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:17.576969 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:18.076487 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:18.076564 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:18.076935 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:18.576468 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:18.576582 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:18.576907 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:19.076426 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:19.076498 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:19.076819 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:19.076870 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:19.576490 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:19.576567 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:19.576904 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:20.076514 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:20.076611 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:20.076996 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:20.576452 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:20.576533 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:20.576869 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:21.076479 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:21.076558 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:21.076898 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:21.076954 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:21.576671 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:21.576745 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:21.577092 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:22.077101 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:22.077188 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:22.077458 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:22.577307 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:22.577395 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:22.577780 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:23.076488 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:23.076566 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:23.076905 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:23.576594 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:23.576667 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:23.576979 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:23.577044 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:24.076716 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:24.076812 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:24.077201 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:24.577016 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:24.577098 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:24.577427 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:25.077197 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:25.077272 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:25.077553 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:25.577396 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:25.577471 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:25.577807 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:25.577866 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:26.076551 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:26.076646 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:26.077007 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:26.576462 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:26.576534 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:26.576839 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:27.076813 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:27.076897 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:27.077258 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:27.577061 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:27.577148 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:27.577479 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:28.077203 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:28.077282 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:28.077580 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:28.077625 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:28.576412 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:28.576489 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:28.576847 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:29.076502 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:29.076581 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:29.076943 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:29.576637 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:29.576712 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:29.576969 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:30.076527 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:30.076611 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:30.077034 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:30.576765 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:30.576846 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:30.577180 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:30.577234 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:31.076904 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:31.076979 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:31.077238 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... identical GET https://192.168.49.2:8441/api/v1/nodes/functional-331811 poll repeated every ~500 ms from 04:40:31.5 through 04:41:32.6, each attempt logging an empty response (dial tcp 192.168.49.2:8441: connect: connection refused), with the node_ready.go:55 "will retry" warning recurring roughly every two seconds; final iteration shown below ...]
	I1209 04:41:33.077339 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:33.077443 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:33.077811 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:33.576503 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:33.576588 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:33.576930 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:34.076499 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:34.076573 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:34.076861 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:34.576573 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:34.576657 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:34.577014 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:34.577071 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:35.076473 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:35.076546 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:35.076895 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:35.576501 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:35.576570 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:35.576829 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:36.076518 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:36.076598 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:36.076971 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:36.576553 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:36.576637 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:36.577032 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:37.076948 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:37.077019 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:37.077352 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:37.077398 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:37.577132 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:37.577216 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:37.577592 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:38.077367 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:38.077444 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:38.077774 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:38.576480 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:38.576549 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:38.576826 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:39.076517 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:39.076596 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:39.077020 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:39.576754 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:39.576834 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:39.577168 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:39.577222 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:40.076627 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:40.076703 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:40.076991 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:40.576486 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:40.576560 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:40.576891 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:41.076611 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:41.076693 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:41.077032 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:41.577374 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:41.577443 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:41.577738 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:41.577796 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:42.076410 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:42.076517 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:42.076959 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:42.576665 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:42.576744 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:42.577069 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:43.076660 1614600 node_ready.go:38] duration metric: took 6m0.000391304s for node "functional-331811" to be "Ready" ...
	I1209 04:41:43.080060 1614600 out.go:203] 
	W1209 04:41:43.083006 1614600 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1209 04:41:43.083030 1614600 out.go:285] * 
	W1209 04:41:43.085173 1614600 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 04:41:43.088614 1614600 out.go:203] 
	
	
	==> CRI-O <==
	Dec 09 04:35:39 functional-331811 crio[5392]: time="2025-12-09T04:35:39.991577379Z" level=info msg="Using the internal default seccomp profile"
	Dec 09 04:35:39 functional-331811 crio[5392]: time="2025-12-09T04:35:39.991590368Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 09 04:35:39 functional-331811 crio[5392]: time="2025-12-09T04:35:39.991596276Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 09 04:35:39 functional-331811 crio[5392]: time="2025-12-09T04:35:39.991608206Z" level=info msg="RDT not available in the host system"
	Dec 09 04:35:39 functional-331811 crio[5392]: time="2025-12-09T04:35:39.991629581Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 09 04:35:39 functional-331811 crio[5392]: time="2025-12-09T04:35:39.992592059Z" level=info msg="Conmon does support the --sync option"
	Dec 09 04:35:39 functional-331811 crio[5392]: time="2025-12-09T04:35:39.992622862Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 09 04:35:39 functional-331811 crio[5392]: time="2025-12-09T04:35:39.992647149Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 09 04:35:39 functional-331811 crio[5392]: time="2025-12-09T04:35:39.993492047Z" level=info msg="Conmon does support the --sync option"
	Dec 09 04:35:39 functional-331811 crio[5392]: time="2025-12-09T04:35:39.993519452Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 09 04:35:39 functional-331811 crio[5392]: time="2025-12-09T04:35:39.993729366Z" level=info msg="Updated default CNI network name to "
	Dec 09 04:35:39 functional-331811 crio[5392]: time="2025-12-09T04:35:39.994436293Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oc
i/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_
memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_d
ir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [c
rio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 09 04:35:39 functional-331811 crio[5392]: time="2025-12-09T04:35:39.995012996Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 09 04:35:39 functional-331811 crio[5392]: time="2025-12-09T04:35:39.995082403Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 09 04:35:40 functional-331811 crio[5392]: time="2025-12-09T04:35:40.059745321Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 09 04:35:40 functional-331811 crio[5392]: time="2025-12-09T04:35:40.059782253Z" level=info msg="Starting seccomp notifier watcher"
	Dec 09 04:35:40 functional-331811 crio[5392]: time="2025-12-09T04:35:40.059831984Z" level=info msg="Create NRI interface"
	Dec 09 04:35:40 functional-331811 crio[5392]: time="2025-12-09T04:35:40.059948186Z" level=info msg="built-in NRI default validator is disabled"
	Dec 09 04:35:40 functional-331811 crio[5392]: time="2025-12-09T04:35:40.059957836Z" level=info msg="runtime interface created"
	Dec 09 04:35:40 functional-331811 crio[5392]: time="2025-12-09T04:35:40.059972769Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 09 04:35:40 functional-331811 crio[5392]: time="2025-12-09T04:35:40.05997962Z" level=info msg="runtime interface starting up..."
	Dec 09 04:35:40 functional-331811 crio[5392]: time="2025-12-09T04:35:40.05998607Z" level=info msg="starting plugins..."
	Dec 09 04:35:40 functional-331811 crio[5392]: time="2025-12-09T04:35:40.059999329Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 09 04:35:40 functional-331811 crio[5392]: time="2025-12-09T04:35:40.060074998Z" level=info msg="No systemd watchdog enabled"
	Dec 09 04:35:40 functional-331811 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:41:47.754856    8796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:41:47.755622    8796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:41:47.757471    8796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:41:47.757976    8796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:41:47.759521    8796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 9 02:15] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 03:35] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 04:15] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 04:17] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:23] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:24] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 04:41:47 up  9:24,  0 user,  load average: 0.33, 0.32, 0.75
	Linux functional-331811 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 09 04:41:45 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:41:46 functional-331811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1141.
	Dec 09 04:41:46 functional-331811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:41:46 functional-331811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:41:46 functional-331811 kubelet[8667]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:41:46 functional-331811 kubelet[8667]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:41:46 functional-331811 kubelet[8667]: E1209 04:41:46.152551    8667 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:41:46 functional-331811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:41:46 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:41:46 functional-331811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1142.
	Dec 09 04:41:46 functional-331811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:41:46 functional-331811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:41:46 functional-331811 kubelet[8703]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:41:46 functional-331811 kubelet[8703]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:41:46 functional-331811 kubelet[8703]: E1209 04:41:46.890196    8703 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:41:46 functional-331811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:41:46 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:41:47 functional-331811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1143.
	Dec 09 04:41:47 functional-331811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:41:47 functional-331811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:41:47 functional-331811 kubelet[8765]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:41:47 functional-331811 kubelet[8765]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:41:47 functional-331811 kubelet[8765]: E1209 04:41:47.634097    8765 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:41:47 functional-331811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:41:47 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
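The kubelet crash loop at the end of the dump above traces to a single validation error, repeated on every systemd restart (counters 1141-1143): "kubelet is configured to not run on a host using cgroup v1", consistent with the Ubuntu 20.04 / 5.15 kernel shown under "==> kernel <==". A quick way to confirm the host's cgroup mode is to statfs /sys/fs/cgroup; below is a minimal sketch in Go, assuming golang.org/x/sys/unix is available. This is a hypothetical triage helper, not part of the test suite, and is the programmatic equivalent of `stat -fc %T /sys/fs/cgroup/`:

	// cgroupmode.go: hypothetical triage helper, not part of the minikube test suite.
	// Reports whether /sys/fs/cgroup is a cgroup v2 (unified) mount, which is what
	// the kubelet validation in the log above requires.
	package main

	import (
		"fmt"

		"golang.org/x/sys/unix"
	)

	func main() {
		var fs unix.Statfs_t
		if err := unix.Statfs("/sys/fs/cgroup", &fs); err != nil {
			fmt.Println("statfs /sys/fs/cgroup:", err)
			return
		}
		if fs.Type == unix.CGROUP2_SUPER_MAGIC {
			fmt.Println("cgroup v2 (unified): kubelet's validation would pass")
		} else {
			fmt.Println("cgroup v1: matches the kubelet failure in the log above")
		}
	}
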
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-331811 -n functional-331811
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-331811 -n functional-331811: exit status 2 (375.680094ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-331811" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.57s)
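
The headline of this failure is the six-minute node-readiness wait earlier in the log: minikube polled GET /api/v1/nodes/functional-331811 every 500ms, got "connection refused" on each attempt (the apiserver never came up because kubelet kept crashing), and exited with GUEST_START after 6m0s. The loop is ordinary client-go polling; the sketch below shows the same check, assuming standard client-go/apimachinery packages. It is illustrative code, not minikube's actual node_ready.go; the kubeconfig path is the one from the run's environment (see the Last Start log further down):

	// readywait.go: illustrative client-go sketch of the Ready-condition poll seen
	// in the log above; not minikube's implementation.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path taken from this run's environment (see the Last Start log).
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22081-1577059/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every 500ms for up to 6m: the cadence and deadline visible in the log.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, "functional-331811", metav1.GetOptions{})
				if err != nil {
					return false, nil // connection refused etc.: keep retrying, as the log does
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		if err != nil {
			fmt.Println("node never became Ready:", err) // the WaitNodeCondition outcome above
		}
	}
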

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.49s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 kubectl -- --context functional-331811 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-331811 kubectl -- --context functional-331811 get pods: exit status 1 (102.661371ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-arm64 -p functional-331811 kubectl -- --context functional-331811 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-331811
helpers_test.go:243: (dbg) docker inspect functional-331811:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87",
	        "Created": "2025-12-09T04:27:19.770188806Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1609115,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T04:27:19.828715728Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/hostname",
	        "HostsPath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/hosts",
	        "LogPath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87-json.log",
	        "Name": "/functional-331811",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-331811:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-331811",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87",
	                "LowerDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685-init/diff:/var/lib/docker/overlay2/cb3f2b8eaaa8875b2899fccd39c4eec1759909855a0b804bc10246bdeabb16ed/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-331811",
	                "Source": "/var/lib/docker/volumes/functional-331811/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-331811",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-331811",
	                "name.minikube.sigs.k8s.io": "functional-331811",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5c0753338127320f08906f0ae98414e1971b55970cf028db179c2214fd2722cb",
	            "SandboxKey": "/var/run/docker/netns/5c0753338127",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34255"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34256"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34259"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34257"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34258"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-331811": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:27:66:bb:a1:d6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8c16962547dedb5d6155d1546bcc27e347ab5261f9ad46fc3b09cc8fb9cc112f",
	                    "EndpointID": "1a5d6a22e9497009b4121ea56dc4839e2ff8827d92252c0464236c5f49c11216",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-331811",
	                        "51da5dad63e9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
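The "Ports" block in the inspect output above shows the container publishing the apiserver on 8441/tcp at 127.0.0.1:34258, yet every kubectl call in this test got "connection refused" against 192.168.49.2:8441: the port mapping is in place but nothing is listening behind it. A dial probe makes that distinction concrete; the sketch below uses only the standard library, with the two endpoint addresses taken from this report:

	// dialprobe.go: minimal connectivity probe for the two apiserver endpoints seen
	// in this report; while kubelet is down both should report "connection refused".
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		for _, addr := range []string{"192.168.49.2:8441", "127.0.0.1:34258"} {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err != nil {
				fmt.Println(addr, "->", err)
				continue
			}
			conn.Close()
			fmt.Println(addr, "-> reachable")
		}
	}
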
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-331811 -n functional-331811
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-331811 -n functional-331811: exit status 2 (336.701914ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-331811 logs -n 25: (1.033046773s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-790468 image ls --format yaml --alsologtostderr                                                                                        │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image   │ functional-790468 image ls --format short --alsologtostderr                                                                                       │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ ssh     │ functional-790468 ssh pgrep buildkitd                                                                                                             │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │                     │
	│ image   │ functional-790468 image ls --format json --alsologtostderr                                                                                        │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image   │ functional-790468 image ls --format table --alsologtostderr                                                                                       │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image   │ functional-790468 image build -t localhost/my-image:functional-790468 testdata/build --alsologtostderr                                            │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image   │ functional-790468 image ls                                                                                                                        │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ delete  │ -p functional-790468                                                                                                                              │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ start   │ -p functional-331811 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │                     │
	│ start   │ -p functional-331811 --alsologtostderr -v=8                                                                                                       │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:35 UTC │                     │
	│ cache   │ functional-331811 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ cache   │ functional-331811 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ cache   │ functional-331811 cache add registry.k8s.io/pause:latest                                                                                          │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ cache   │ functional-331811 cache add minikube-local-cache-test:functional-331811                                                                           │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ cache   │ functional-331811 cache delete minikube-local-cache-test:functional-331811                                                                        │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ ssh     │ functional-331811 ssh sudo crictl images                                                                                                          │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ ssh     │ functional-331811 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ ssh     │ functional-331811 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │                     │
	│ cache   │ functional-331811 cache reload                                                                                                                    │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ ssh     │ functional-331811 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ kubectl │ functional-331811 kubectl -- --context functional-331811 get pods                                                                                 │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 04:35:36
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 04:35:36.923741 1614600 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:35:36.923916 1614600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:35:36.923926 1614600 out.go:374] Setting ErrFile to fd 2...
	I1209 04:35:36.923933 1614600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:35:36.924200 1614600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 04:35:36.924580 1614600 out.go:368] Setting JSON to false
	I1209 04:35:36.925424 1614600 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":33477,"bootTime":1765221460,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1209 04:35:36.925503 1614600 start.go:143] virtualization:  
	I1209 04:35:36.929063 1614600 out.go:179] * [functional-331811] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 04:35:36.932800 1614600 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 04:35:36.932938 1614600 notify.go:221] Checking for updates...
	I1209 04:35:36.938644 1614600 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:35:36.941493 1614600 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:35:36.944366 1614600 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1577059/.minikube
	I1209 04:35:36.947167 1614600 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 04:35:36.949981 1614600 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 04:35:36.953271 1614600 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 04:35:36.953380 1614600 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:35:36.980248 1614600 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:35:36.980355 1614600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:35:37.042703 1614600 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 04:35:37.032815271 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:35:37.042820 1614600 docker.go:319] overlay module found
	I1209 04:35:37.045833 1614600 out.go:179] * Using the docker driver based on existing profile
	I1209 04:35:37.048621 1614600 start.go:309] selected driver: docker
	I1209 04:35:37.048647 1614600 start.go:927] validating driver "docker" against &{Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:35:37.048735 1614600 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 04:35:37.048847 1614600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:35:37.101945 1614600 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 04:35:37.092778249 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:35:37.102371 1614600 cni.go:84] Creating CNI manager for ""
	I1209 04:35:37.102446 1614600 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 04:35:37.102494 1614600 start.go:353] cluster config:
	{Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:35:37.105799 1614600 out.go:179] * Starting "functional-331811" primary control-plane node in "functional-331811" cluster
	I1209 04:35:37.108781 1614600 cache.go:134] Beginning downloading kic base image for docker with crio
	I1209 04:35:37.111778 1614600 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 04:35:37.114815 1614600 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1209 04:35:37.114886 1614600 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1209 04:35:37.114901 1614600 cache.go:65] Caching tarball of preloaded images
	I1209 04:35:37.114901 1614600 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 04:35:37.114988 1614600 preload.go:238] Found /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1209 04:35:37.114998 1614600 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1209 04:35:37.115114 1614600 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/config.json ...
	I1209 04:35:37.133782 1614600 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 04:35:37.133805 1614600 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 04:35:37.133825 1614600 cache.go:243] Successfully downloaded all kic artifacts
	I1209 04:35:37.133858 1614600 start.go:360] acquireMachinesLock for functional-331811: {Name:mkd467b4f3dd08f05040481144eb7b6b1e27d3ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 04:35:37.133920 1614600 start.go:364] duration metric: took 38.638µs to acquireMachinesLock for "functional-331811"
	I1209 04:35:37.133944 1614600 start.go:96] Skipping create...Using existing machine configuration
	I1209 04:35:37.133953 1614600 fix.go:54] fixHost starting: 
	I1209 04:35:37.134223 1614600 cli_runner.go:164] Run: docker container inspect functional-331811 --format={{.State.Status}}
	I1209 04:35:37.151389 1614600 fix.go:112] recreateIfNeeded on functional-331811: state=Running err=<nil>
	W1209 04:35:37.151428 1614600 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 04:35:37.154776 1614600 out.go:252] * Updating the running docker "functional-331811" container ...
	I1209 04:35:37.154815 1614600 machine.go:94] provisionDockerMachine start ...
	I1209 04:35:37.154907 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:37.171646 1614600 main.go:143] libmachine: Using SSH client type: native
	I1209 04:35:37.171972 1614600 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34255 <nil> <nil>}
	I1209 04:35:37.171985 1614600 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 04:35:37.327745 1614600 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-331811
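The two commands above show how minikube reaches the node: it asks Docker for the host port published for the container's 22/tcp (34255 here), then opens a native SSH session against 127.0.0.1. A minimal sketch of the same lookup from a shell, assuming the functional-331811 container is still running (the key path and docker user are taken from the sshutil lines later in this log):

    # ask Docker which host port is mapped to the container's SSH port
    PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-331811)
    # connect the same way the embedded SSH client does
    ssh -p "$PORT" -i /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa docker@127.0.0.1 hostname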
	
	I1209 04:35:37.327810 1614600 ubuntu.go:182] provisioning hostname "functional-331811"
	I1209 04:35:37.327896 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:37.347228 1614600 main.go:143] libmachine: Using SSH client type: native
	I1209 04:35:37.347562 1614600 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34255 <nil> <nil>}
	I1209 04:35:37.347574 1614600 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-331811 && echo "functional-331811" | sudo tee /etc/hostname
	I1209 04:35:37.512164 1614600 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-331811
	
	I1209 04:35:37.512262 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:37.529769 1614600 main.go:143] libmachine: Using SSH client type: native
	I1209 04:35:37.530100 1614600 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34255 <nil> <nil>}
	I1209 04:35:37.530124 1614600 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-331811' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-331811/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-331811' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 04:35:37.682808 1614600 main.go:143] libmachine: SSH cmd err, output: <nil>: 
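The shell fragment above is idempotent: it leaves /etc/hosts alone when a line already ends in the hostname, rewrites an existing 127.0.1.1 entry in place, and only appends otherwise. A quick way to confirm the result on the node (a sketch, assuming an interactive shell there):

    grep '^127.0.1.1' /etc/hosts        # expect: 127.0.1.1 functional-331811
    getent hosts functional-331811      # resolves through /etc/hosts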
	I1209 04:35:37.682838 1614600 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1577059/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1577059/.minikube}
	I1209 04:35:37.682870 1614600 ubuntu.go:190] setting up certificates
	I1209 04:35:37.682895 1614600 provision.go:84] configureAuth start
	I1209 04:35:37.682958 1614600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-331811
	I1209 04:35:37.700930 1614600 provision.go:143] copyHostCerts
	I1209 04:35:37.700976 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem
	I1209 04:35:37.701008 1614600 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem, removing ...
	I1209 04:35:37.701021 1614600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem
	I1209 04:35:37.701094 1614600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem (1078 bytes)
	I1209 04:35:37.701192 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem
	I1209 04:35:37.701215 1614600 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem, removing ...
	I1209 04:35:37.701230 1614600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem
	I1209 04:35:37.701259 1614600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem (1123 bytes)
	I1209 04:35:37.701304 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem
	I1209 04:35:37.701324 1614600 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem, removing ...
	I1209 04:35:37.701331 1614600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem
	I1209 04:35:37.701357 1614600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem (1675 bytes)
	I1209 04:35:37.701411 1614600 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem org=jenkins.functional-331811 san=[127.0.0.1 192.168.49.2 functional-331811 localhost minikube]
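The server certificate is minted against the profile CA with the SAN list shown above (loopback, the node IP 192.168.49.2, and the hostname aliases). One way to double-check the SANs on the generated PEM, assuming openssl is available on the runner:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'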
	I1209 04:35:37.907915 1614600 provision.go:177] copyRemoteCerts
	I1209 04:35:37.907981 1614600 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 04:35:37.908038 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:37.925118 1614600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:35:38.031668 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 04:35:38.031745 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 04:35:38.051846 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 04:35:38.051953 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 04:35:38.075178 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 04:35:38.075249 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 04:35:38.102039 1614600 provision.go:87] duration metric: took 419.115897ms to configureAuth
	I1209 04:35:38.102117 1614600 ubuntu.go:206] setting minikube options for container-runtime
	I1209 04:35:38.102384 1614600 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 04:35:38.102539 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:38.125059 1614600 main.go:143] libmachine: Using SSH client type: native
	I1209 04:35:38.125376 1614600 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34255 <nil> <nil>}
	I1209 04:35:38.125391 1614600 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 04:35:38.471803 1614600 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 04:35:38.471824 1614600 machine.go:97] duration metric: took 1.317001735s to provisionDockerMachine
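Provisioning finishes by writing a one-line environment drop-in that marks the in-cluster service CIDR (10.96.0.0/12) as an insecure registry range, then restarting CRI-O so it takes effect. To verify the drop-in and that the runtime came back up (run on the node):

    cat /etc/sysconfig/crio.minikube    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    sudo systemctl is-active crio       # expect: active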
	I1209 04:35:38.471836 1614600 start.go:293] postStartSetup for "functional-331811" (driver="docker")
	I1209 04:35:38.471848 1614600 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 04:35:38.471925 1614600 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 04:35:38.471961 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:38.490918 1614600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:35:38.598660 1614600 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 04:35:38.602109 1614600 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1209 04:35:38.602129 1614600 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1209 04:35:38.602133 1614600 command_runner.go:130] > VERSION_ID="12"
	I1209 04:35:38.602137 1614600 command_runner.go:130] > VERSION="12 (bookworm)"
	I1209 04:35:38.602143 1614600 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1209 04:35:38.602146 1614600 command_runner.go:130] > ID=debian
	I1209 04:35:38.602151 1614600 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1209 04:35:38.602156 1614600 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1209 04:35:38.602162 1614600 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1209 04:35:38.602263 1614600 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 04:35:38.602312 1614600 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 04:35:38.602329 1614600 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1577059/.minikube/addons for local assets ...
	I1209 04:35:38.602392 1614600 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1577059/.minikube/files for local assets ...
	I1209 04:35:38.602478 1614600 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem -> 15805212.pem in /etc/ssl/certs
	I1209 04:35:38.602488 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem -> /etc/ssl/certs/15805212.pem
	I1209 04:35:38.602561 1614600 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/test/nested/copy/1580521/hosts -> hosts in /etc/test/nested/copy/1580521
	I1209 04:35:38.602585 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/test/nested/copy/1580521/hosts -> /etc/test/nested/copy/1580521/hosts
	I1209 04:35:38.602639 1614600 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1580521
	I1209 04:35:38.610143 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem --> /etc/ssl/certs/15805212.pem (1708 bytes)
	I1209 04:35:38.627602 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/test/nested/copy/1580521/hosts --> /etc/test/nested/copy/1580521/hosts (40 bytes)
	I1209 04:35:38.644510 1614600 start.go:296] duration metric: took 172.65884ms for postStartSetup
	I1209 04:35:38.644590 1614600 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 04:35:38.644638 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:38.661666 1614600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:35:38.763521 1614600 command_runner.go:130] > 14%
	I1209 04:35:38.763600 1614600 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 04:35:38.767910 1614600 command_runner.go:130] > 169G
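The two df probes read the usage columns for the /var filesystem: awk 'NR==2{print $5}' picks the Use% field from the second (data) row of df -h, and print $4 under df -BG the available space in gibibytes, giving 14% used and 169G free here. The same checks by hand:

    df -h /var  | awk 'NR==2{print $5}'   # Use%  -> 14%
    df -BG /var | awk 'NR==2{print $4}'   # Avail -> 169G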
	I1209 04:35:38.768419 1614600 fix.go:56] duration metric: took 1.634462107s for fixHost
	I1209 04:35:38.768442 1614600 start.go:83] releasing machines lock for "functional-331811", held for 1.634508761s
	I1209 04:35:38.768510 1614600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-331811
	I1209 04:35:38.785686 1614600 ssh_runner.go:195] Run: cat /version.json
	I1209 04:35:38.785708 1614600 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 04:35:38.785735 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:38.785760 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:38.812264 1614600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:35:38.824669 1614600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:35:38.938034 1614600 command_runner.go:130] > {"iso_version": "v1.37.0-1764843329-22032", "kicbase_version": "v0.0.48-1765184860-22066", "minikube_version": "v1.37.0", "commit": "27bcd52be11288bda2f9abde063aa47b22607695"}
	I1209 04:35:38.938167 1614600 ssh_runner.go:195] Run: systemctl --version
	I1209 04:35:39.026186 1614600 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1209 04:35:39.029038 1614600 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1209 04:35:39.029075 1614600 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1209 04:35:39.029143 1614600 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 04:35:39.066886 1614600 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1209 04:35:39.071437 1614600 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1209 04:35:39.071476 1614600 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 04:35:39.071539 1614600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 04:35:39.079896 1614600 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 04:35:39.079922 1614600 start.go:496] detecting cgroup driver to use...
	I1209 04:35:39.079956 1614600 detect.go:187] detected "cgroupfs" cgroup driver on host os
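Here minikube has detected cgroupfs as the host's cgroup driver, a value it writes into the CRI-O config a few steps below. A rough manual check of the host's cgroup setup (a sketch, not minikube's actual detection code in detect.go):

    stat -fc %T /sys/fs/cgroup   # tmpfs -> cgroup v1 hierarchy, cgroup2fs -> cgroup v2
    ps -o comm= -p 1             # PID 1; systemd hosts usually pair with the systemd driver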
	I1209 04:35:39.080020 1614600 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 04:35:39.095690 1614600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 04:35:39.109020 1614600 docker.go:218] disabling cri-docker service (if available) ...
	I1209 04:35:39.109092 1614600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 04:35:39.124696 1614600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 04:35:39.138081 1614600 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 04:35:39.247127 1614600 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 04:35:39.364113 1614600 docker.go:234] disabling docker service ...
	I1209 04:35:39.364202 1614600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 04:35:39.381227 1614600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 04:35:39.394458 1614600 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 04:35:39.513409 1614600 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 04:35:39.656760 1614600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 04:35:39.669700 1614600 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 04:35:39.682849 1614600 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1209 04:35:39.684261 1614600 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 04:35:39.684369 1614600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:35:39.693327 1614600 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 04:35:39.693420 1614600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:35:39.702710 1614600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:35:39.711893 1614600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:35:39.720974 1614600 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 04:35:39.729134 1614600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:35:39.738010 1614600 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:35:39.746818 1614600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:35:39.757592 1614600 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 04:35:39.764510 1614600 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1209 04:35:39.765518 1614600 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 04:35:39.773280 1614600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:35:39.885186 1614600 ssh_runner.go:195] Run: sudo systemctl restart crio
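Taken together, the steps since the crictl.yaml write leave the runtime in a known state: crictl pointed at the CRI-O socket, the pause image pinned, cgroupfs as the cgroup manager with conmon in the "pod" cgroup, and unprivileged ports opened from 0 via default_sysctls. After the restart the touched files should read roughly as follows (reconstructed from the sed commands above, not captured verbatim from the node):

    # /etc/crictl.yaml
    runtime-endpoint: unix:///var/run/crio/crio.sock

    # /etc/crio/crio.conf.d/02-crio.conf (relevant lines)
    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]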
	I1209 04:35:40.065444 1614600 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 04:35:40.065521 1614600 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 04:35:40.069680 1614600 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1209 04:35:40.069719 1614600 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1209 04:35:40.069751 1614600 command_runner.go:130] > Device: 0,72	Inode: 1638        Links: 1
	I1209 04:35:40.069764 1614600 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1209 04:35:40.069773 1614600 command_runner.go:130] > Access: 2025-12-09 04:35:39.990981436 +0000
	I1209 04:35:40.069780 1614600 command_runner.go:130] > Modify: 2025-12-09 04:35:39.990981436 +0000
	I1209 04:35:40.069788 1614600 command_runner.go:130] > Change: 2025-12-09 04:35:39.990981436 +0000
	I1209 04:35:40.069792 1614600 command_runner.go:130] >  Birth: -
	I1209 04:35:40.069850 1614600 start.go:564] Will wait 60s for crictl version
	I1209 04:35:40.069925 1614600 ssh_runner.go:195] Run: which crictl
	I1209 04:35:40.073554 1614600 command_runner.go:130] > /usr/local/bin/crictl
	I1209 04:35:40.073791 1614600 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 04:35:40.095945 1614600 command_runner.go:130] > Version:  0.1.0
	I1209 04:35:40.096030 1614600 command_runner.go:130] > RuntimeName:  cri-o
	I1209 04:35:40.096051 1614600 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1209 04:35:40.096074 1614600 command_runner.go:130] > RuntimeApiVersion:  v1
	I1209 04:35:40.098378 1614600 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
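With the socket back, the crictl probe confirms CRI-O 1.34.3 speaking CRI API v1. The same check by hand, with the endpoint made explicit (crictl otherwise reads it from the /etc/crictl.yaml written above):

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version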
	I1209 04:35:40.098514 1614600 ssh_runner.go:195] Run: crio --version
	I1209 04:35:40.127067 1614600 command_runner.go:130] > crio version 1.34.3
	I1209 04:35:40.127092 1614600 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1209 04:35:40.127099 1614600 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1209 04:35:40.127105 1614600 command_runner.go:130] >    GitTreeState:   dirty
	I1209 04:35:40.127110 1614600 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1209 04:35:40.127137 1614600 command_runner.go:130] >    GoVersion:      go1.24.6
	I1209 04:35:40.127156 1614600 command_runner.go:130] >    Compiler:       gc
	I1209 04:35:40.127168 1614600 command_runner.go:130] >    Platform:       linux/arm64
	I1209 04:35:40.127172 1614600 command_runner.go:130] >    Linkmode:       static
	I1209 04:35:40.127180 1614600 command_runner.go:130] >    BuildTags:
	I1209 04:35:40.127185 1614600 command_runner.go:130] >      static
	I1209 04:35:40.127194 1614600 command_runner.go:130] >      netgo
	I1209 04:35:40.127198 1614600 command_runner.go:130] >      osusergo
	I1209 04:35:40.127227 1614600 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1209 04:35:40.127238 1614600 command_runner.go:130] >      seccomp
	I1209 04:35:40.127242 1614600 command_runner.go:130] >      apparmor
	I1209 04:35:40.127250 1614600 command_runner.go:130] >      selinux
	I1209 04:35:40.127255 1614600 command_runner.go:130] >    LDFlags:          unknown
	I1209 04:35:40.127262 1614600 command_runner.go:130] >    SeccompEnabled:   true
	I1209 04:35:40.127267 1614600 command_runner.go:130] >    AppArmorEnabled:  false
	I1209 04:35:40.129252 1614600 ssh_runner.go:195] Run: crio --version
	I1209 04:35:40.157358 1614600 command_runner.go:130] > crio version 1.34.3
	I1209 04:35:40.157406 1614600 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1209 04:35:40.157412 1614600 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1209 04:35:40.157417 1614600 command_runner.go:130] >    GitTreeState:   dirty
	I1209 04:35:40.157423 1614600 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1209 04:35:40.157427 1614600 command_runner.go:130] >    GoVersion:      go1.24.6
	I1209 04:35:40.157432 1614600 command_runner.go:130] >    Compiler:       gc
	I1209 04:35:40.157472 1614600 command_runner.go:130] >    Platform:       linux/arm64
	I1209 04:35:40.157484 1614600 command_runner.go:130] >    Linkmode:       static
	I1209 04:35:40.157489 1614600 command_runner.go:130] >    BuildTags:
	I1209 04:35:40.157492 1614600 command_runner.go:130] >      static
	I1209 04:35:40.157496 1614600 command_runner.go:130] >      netgo
	I1209 04:35:40.157508 1614600 command_runner.go:130] >      osusergo
	I1209 04:35:40.157512 1614600 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1209 04:35:40.157516 1614600 command_runner.go:130] >      seccomp
	I1209 04:35:40.157547 1614600 command_runner.go:130] >      apparmor
	I1209 04:35:40.157557 1614600 command_runner.go:130] >      selinux
	I1209 04:35:40.157562 1614600 command_runner.go:130] >    LDFlags:          unknown
	I1209 04:35:40.157567 1614600 command_runner.go:130] >    SeccompEnabled:   true
	I1209 04:35:40.157573 1614600 command_runner.go:130] >    AppArmorEnabled:  false
	I1209 04:35:40.164627 1614600 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1209 04:35:40.167496 1614600 cli_runner.go:164] Run: docker network inspect functional-331811 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 04:35:40.183934 1614600 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1209 04:35:40.187985 1614600 command_runner.go:130] > 192.168.49.1	host.minikube.internal
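The long --format template above flattens the network's name, driver, IPAM subnet and gateway, MTU option, and per-container IPs into a single JSON object. A trimmed query for just the addressing, assuming the same network name:

    docker network inspect functional-331811 \
      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
    # expect something like: 192.168.49.0/24 192.168.49.1
    # (the gateway matching the host.minikube.internal entry checked above)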
	I1209 04:35:40.188113 1614600 kubeadm.go:884] updating cluster {Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 04:35:40.188232 1614600 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1209 04:35:40.188297 1614600 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:35:40.225616 1614600 command_runner.go:130] > {
	I1209 04:35:40.225636 1614600 command_runner.go:130] >   "images":  [
	I1209 04:35:40.225641 1614600 command_runner.go:130] >     {
	I1209 04:35:40.225650 1614600 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1209 04:35:40.225655 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.225670 1614600 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1209 04:35:40.225673 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225678 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.225687 1614600 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1209 04:35:40.225695 1614600 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1209 04:35:40.225699 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225704 1614600 command_runner.go:130] >       "size":  "111333938",
	I1209 04:35:40.225711 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.225716 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.225719 1614600 command_runner.go:130] >     },
	I1209 04:35:40.225723 1614600 command_runner.go:130] >     {
	I1209 04:35:40.225729 1614600 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1209 04:35:40.225733 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.225738 1614600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1209 04:35:40.225742 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225751 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.225760 1614600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1209 04:35:40.225769 1614600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1209 04:35:40.225773 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225777 1614600 command_runner.go:130] >       "size":  "29037500",
	I1209 04:35:40.225781 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.225789 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.225792 1614600 command_runner.go:130] >     },
	I1209 04:35:40.225795 1614600 command_runner.go:130] >     {
	I1209 04:35:40.225802 1614600 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1209 04:35:40.225806 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.225811 1614600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1209 04:35:40.225814 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225818 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.225826 1614600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1209 04:35:40.225835 1614600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1209 04:35:40.225838 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225842 1614600 command_runner.go:130] >       "size":  "74491780",
	I1209 04:35:40.225847 1614600 command_runner.go:130] >       "username":  "nonroot",
	I1209 04:35:40.225851 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.225854 1614600 command_runner.go:130] >     },
	I1209 04:35:40.225857 1614600 command_runner.go:130] >     {
	I1209 04:35:40.225864 1614600 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1209 04:35:40.225868 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.225872 1614600 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1209 04:35:40.225881 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225885 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.225897 1614600 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1209 04:35:40.225905 1614600 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1209 04:35:40.225909 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225913 1614600 command_runner.go:130] >       "size":  "60857170",
	I1209 04:35:40.225916 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.225920 1614600 command_runner.go:130] >         "value":  "0"
	I1209 04:35:40.225923 1614600 command_runner.go:130] >       },
	I1209 04:35:40.225931 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.225936 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.225939 1614600 command_runner.go:130] >     },
	I1209 04:35:40.225942 1614600 command_runner.go:130] >     {
	I1209 04:35:40.225949 1614600 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1209 04:35:40.225953 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.225958 1614600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1209 04:35:40.225961 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225965 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.225973 1614600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1209 04:35:40.225981 1614600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1209 04:35:40.225983 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225987 1614600 command_runner.go:130] >       "size":  "84949999",
	I1209 04:35:40.225991 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.225995 1614600 command_runner.go:130] >         "value":  "0"
	I1209 04:35:40.225998 1614600 command_runner.go:130] >       },
	I1209 04:35:40.226001 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.226005 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.226008 1614600 command_runner.go:130] >     },
	I1209 04:35:40.226011 1614600 command_runner.go:130] >     {
	I1209 04:35:40.226018 1614600 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1209 04:35:40.226021 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.226027 1614600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1209 04:35:40.226030 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.226037 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.226045 1614600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1209 04:35:40.226054 1614600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1209 04:35:40.226057 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.226062 1614600 command_runner.go:130] >       "size":  "72170325",
	I1209 04:35:40.226065 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.226069 1614600 command_runner.go:130] >         "value":  "0"
	I1209 04:35:40.226072 1614600 command_runner.go:130] >       },
	I1209 04:35:40.226076 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.226080 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.226082 1614600 command_runner.go:130] >     },
	I1209 04:35:40.226085 1614600 command_runner.go:130] >     {
	I1209 04:35:40.226092 1614600 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1209 04:35:40.226096 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.226101 1614600 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1209 04:35:40.226104 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.226108 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.226115 1614600 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1209 04:35:40.226123 1614600 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1209 04:35:40.226126 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.226130 1614600 command_runner.go:130] >       "size":  "74106775",
	I1209 04:35:40.226133 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.226137 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.226140 1614600 command_runner.go:130] >     },
	I1209 04:35:40.226143 1614600 command_runner.go:130] >     {
	I1209 04:35:40.226149 1614600 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1209 04:35:40.226153 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.226159 1614600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1209 04:35:40.226162 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.226166 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.226174 1614600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1209 04:35:40.226196 1614600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1209 04:35:40.226200 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.226207 1614600 command_runner.go:130] >       "size":  "49822549",
	I1209 04:35:40.226210 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.226214 1614600 command_runner.go:130] >         "value":  "0"
	I1209 04:35:40.226218 1614600 command_runner.go:130] >       },
	I1209 04:35:40.226222 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.226226 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.226228 1614600 command_runner.go:130] >     },
	I1209 04:35:40.226232 1614600 command_runner.go:130] >     {
	I1209 04:35:40.226238 1614600 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1209 04:35:40.226242 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.226246 1614600 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1209 04:35:40.226249 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.226253 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.226261 1614600 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1209 04:35:40.226269 1614600 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1209 04:35:40.226273 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.226277 1614600 command_runner.go:130] >       "size":  "519884",
	I1209 04:35:40.226280 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.226284 1614600 command_runner.go:130] >         "value":  "65535"
	I1209 04:35:40.226288 1614600 command_runner.go:130] >       },
	I1209 04:35:40.226294 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.226297 1614600 command_runner.go:130] >       "pinned":  true
	I1209 04:35:40.226301 1614600 command_runner.go:130] >     }
	I1209 04:35:40.226303 1614600 command_runner.go:130] >   ]
	I1209 04:35:40.226307 1614600 command_runner.go:130] > }
	I1209 04:35:40.228010 1614600 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 04:35:40.228035 1614600 crio.go:433] Images already preloaded, skipping extraction
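The preload check compares the JSON dump above against the image set expected for Kubernetes v1.35.0-beta.0 on CRI-O; since every entry is already present, extraction of the preload tarball is skipped. A compact way to eyeball the same list (assuming jq is installed wherever you run it):

    sudo crictl images --output json | jq -r '.images[].repoTags[]'
    # kindnetd, storage-provisioner, coredns v1.13.1, etcd 3.6.5-0, pause:3.10.1,
    # and kube-{apiserver,controller-manager,proxy,scheduler} v1.35.0-beta.0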
	I1209 04:35:40.228091 1614600 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:35:40.253311 1614600 command_runner.go:130] > {
	I1209 04:35:40.253331 1614600 command_runner.go:130] >   "images":  [
	I1209 04:35:40.253335 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253349 1614600 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1209 04:35:40.253353 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253360 1614600 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1209 04:35:40.253363 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253367 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253375 1614600 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1209 04:35:40.253383 1614600 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1209 04:35:40.253386 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253391 1614600 command_runner.go:130] >       "size":  "111333938",
	I1209 04:35:40.253395 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.253400 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.253403 1614600 command_runner.go:130] >     },
	I1209 04:35:40.253406 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253412 1614600 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1209 04:35:40.253416 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253421 1614600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1209 04:35:40.253425 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253429 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253437 1614600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1209 04:35:40.253445 1614600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1209 04:35:40.253449 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253453 1614600 command_runner.go:130] >       "size":  "29037500",
	I1209 04:35:40.253457 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.253463 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.253466 1614600 command_runner.go:130] >     },
	I1209 04:35:40.253469 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253476 1614600 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1209 04:35:40.253480 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253485 1614600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1209 04:35:40.253489 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253492 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253500 1614600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1209 04:35:40.253508 1614600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1209 04:35:40.253515 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253519 1614600 command_runner.go:130] >       "size":  "74491780",
	I1209 04:35:40.253523 1614600 command_runner.go:130] >       "username":  "nonroot",
	I1209 04:35:40.253528 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.253531 1614600 command_runner.go:130] >     },
	I1209 04:35:40.253534 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253540 1614600 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1209 04:35:40.253544 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253549 1614600 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1209 04:35:40.253553 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253557 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253564 1614600 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1209 04:35:40.253571 1614600 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1209 04:35:40.253574 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253578 1614600 command_runner.go:130] >       "size":  "60857170",
	I1209 04:35:40.253581 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.253585 1614600 command_runner.go:130] >         "value":  "0"
	I1209 04:35:40.253592 1614600 command_runner.go:130] >       },
	I1209 04:35:40.253600 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.253604 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.253607 1614600 command_runner.go:130] >     },
	I1209 04:35:40.253611 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253617 1614600 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1209 04:35:40.253621 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253626 1614600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1209 04:35:40.253629 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253633 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253641 1614600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1209 04:35:40.253649 1614600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1209 04:35:40.253651 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253655 1614600 command_runner.go:130] >       "size":  "84949999",
	I1209 04:35:40.253659 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.253662 1614600 command_runner.go:130] >         "value":  "0"
	I1209 04:35:40.253669 1614600 command_runner.go:130] >       },
	I1209 04:35:40.253672 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.253676 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.253679 1614600 command_runner.go:130] >     },
	I1209 04:35:40.253682 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253688 1614600 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1209 04:35:40.253691 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253698 1614600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1209 04:35:40.253701 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253704 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253713 1614600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1209 04:35:40.253721 1614600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1209 04:35:40.253724 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253728 1614600 command_runner.go:130] >       "size":  "72170325",
	I1209 04:35:40.253731 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.253735 1614600 command_runner.go:130] >         "value":  "0"
	I1209 04:35:40.253738 1614600 command_runner.go:130] >       },
	I1209 04:35:40.253742 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.253745 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.253748 1614600 command_runner.go:130] >     },
	I1209 04:35:40.253751 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253758 1614600 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1209 04:35:40.253762 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253767 1614600 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1209 04:35:40.253770 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253773 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253781 1614600 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1209 04:35:40.253789 1614600 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1209 04:35:40.253792 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253795 1614600 command_runner.go:130] >       "size":  "74106775",
	I1209 04:35:40.253799 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.253803 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.253806 1614600 command_runner.go:130] >     },
	I1209 04:35:40.253812 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253819 1614600 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1209 04:35:40.253823 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253828 1614600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1209 04:35:40.253831 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253835 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253843 1614600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1209 04:35:40.253860 1614600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1209 04:35:40.253863 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253867 1614600 command_runner.go:130] >       "size":  "49822549",
	I1209 04:35:40.253870 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.253874 1614600 command_runner.go:130] >         "value":  "0"
	I1209 04:35:40.253877 1614600 command_runner.go:130] >       },
	I1209 04:35:40.253881 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.253884 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.253887 1614600 command_runner.go:130] >     },
	I1209 04:35:40.253890 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253896 1614600 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1209 04:35:40.253900 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253905 1614600 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1209 04:35:40.253908 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253912 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253919 1614600 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1209 04:35:40.253926 1614600 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1209 04:35:40.253929 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253934 1614600 command_runner.go:130] >       "size":  "519884",
	I1209 04:35:40.253937 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.253941 1614600 command_runner.go:130] >         "value":  "65535"
	I1209 04:35:40.253944 1614600 command_runner.go:130] >       },
	I1209 04:35:40.253948 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.253952 1614600 command_runner.go:130] >       "pinned":  true
	I1209 04:35:40.253955 1614600 command_runner.go:130] >     }
	I1209 04:35:40.253958 1614600 command_runner.go:130] >   ]
	I1209 04:35:40.253965 1614600 command_runner.go:130] > }
	I1209 04:35:40.254095 1614600 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 04:35:40.254103 1614600 cache_images.go:86] Images are preloaded, skipping loading
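
The image list above is CRI-O's JSON reply to a CRI image-list query. A minimal Go sketch of decoding that shape (the struct and field selection are inferred from the output above, not taken from minikube's source):

package main

import (
	"encoding/json"
	"fmt"
)

// criImage mirrors the per-image fields visible in the log output above.
type criImage struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"` // emitted as a string, per the log
	Username    string   `json:"username"`
	Pinned      bool     `json:"pinned"`
}

func main() {
	// Shortened sample built from the pause image entry above.
	raw := `{"images": [{"id": "d7b100cd9a77", "repoTags": ["registry.k8s.io/pause:3.10.1"], "size": "519884", "pinned": true}]}`
	var out struct {
		Images []criImage `json:"images"`
	}
	if err := json.Unmarshal([]byte(raw), &out); err != nil {
		panic(err)
	}
	for _, img := range out.Images {
		fmt.Printf("%v pinned=%v size=%s\n", img.RepoTags, img.Pinned, img.Size)
	}
}
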
	I1209 04:35:40.254110 1614600 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1209 04:35:40.254208 1614600 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-331811 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
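
The drop-in above first clears the packaged command with an empty ExecStart= line, then supplies minikube's own kubelet invocation. A minimal sketch of rendering such an override with Go's text/template (the template text and parameter names are hypothetical, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// An empty ExecStart= resets the unit's original command; the second
// ExecStart then supplies the replacement, as in the log above.
const dropIn = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart={{.Kubelet}} --hostname-override={{.Node}} --node-ip={{.IP}} --kubeconfig=/etc/kubernetes/kubelet.conf

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	_ = t.Execute(os.Stdout, map[string]string{
		"Runtime": "crio",
		"Kubelet": "/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet",
		"Node":    "functional-331811",
		"IP":      "192.168.49.2",
	})
}
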
	I1209 04:35:40.254292 1614600 ssh_runner.go:195] Run: crio config
	I1209 04:35:40.303771 1614600 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1209 04:35:40.303802 1614600 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1209 04:35:40.303810 1614600 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1209 04:35:40.303813 1614600 command_runner.go:130] > #
	I1209 04:35:40.303821 1614600 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1209 04:35:40.303827 1614600 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1209 04:35:40.303834 1614600 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1209 04:35:40.303844 1614600 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1209 04:35:40.303848 1614600 command_runner.go:130] > # reload'.
	I1209 04:35:40.303854 1614600 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1209 04:35:40.303865 1614600 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1209 04:35:40.303872 1614600 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1209 04:35:40.303882 1614600 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1209 04:35:40.303886 1614600 command_runner.go:130] > [crio]
	I1209 04:35:40.303892 1614600 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1209 04:35:40.303900 1614600 command_runner.go:130] > # containers images, in this directory.
	I1209 04:35:40.304039 1614600 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1209 04:35:40.304055 1614600 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1209 04:35:40.304161 1614600 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1209 04:35:40.304178 1614600 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from the root directory.
	I1209 04:35:40.304429 1614600 command_runner.go:130] > # imagestore = ""
	I1209 04:35:40.304453 1614600 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1209 04:35:40.304461 1614600 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1209 04:35:40.304691 1614600 command_runner.go:130] > # storage_driver = "overlay"
	I1209 04:35:40.304703 1614600 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1209 04:35:40.304710 1614600 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1209 04:35:40.304804 1614600 command_runner.go:130] > # storage_option = [
	I1209 04:35:40.305009 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.305024 1614600 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1209 04:35:40.305032 1614600 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1209 04:35:40.305284 1614600 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1209 04:35:40.305301 1614600 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1209 04:35:40.305327 1614600 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1209 04:35:40.305337 1614600 command_runner.go:130] > # always happen on a node reboot
	I1209 04:35:40.305502 1614600 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1209 04:35:40.305532 1614600 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1209 04:35:40.305540 1614600 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1209 04:35:40.305547 1614600 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1209 04:35:40.305748 1614600 command_runner.go:130] > # version_file_persist = ""
	I1209 04:35:40.305764 1614600 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1209 04:35:40.305775 1614600 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1209 04:35:40.306057 1614600 command_runner.go:130] > # internal_wipe = true
	I1209 04:35:40.306082 1614600 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1209 04:35:40.306090 1614600 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1209 04:35:40.306271 1614600 command_runner.go:130] > # internal_repair = true
	I1209 04:35:40.306293 1614600 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1209 04:35:40.306300 1614600 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1209 04:35:40.306308 1614600 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1209 04:35:40.306632 1614600 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1209 04:35:40.306647 1614600 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1209 04:35:40.306651 1614600 command_runner.go:130] > [crio.api]
	I1209 04:35:40.306663 1614600 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1209 04:35:40.306916 1614600 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1209 04:35:40.306934 1614600 command_runner.go:130] > # IP address on which the stream server will listen.
	I1209 04:35:40.307148 1614600 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1209 04:35:40.307163 1614600 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1209 04:35:40.307169 1614600 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1209 04:35:40.307396 1614600 command_runner.go:130] > # stream_port = "0"
	I1209 04:35:40.307416 1614600 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1209 04:35:40.307661 1614600 command_runner.go:130] > # stream_enable_tls = false
	I1209 04:35:40.307682 1614600 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1209 04:35:40.307871 1614600 command_runner.go:130] > # stream_idle_timeout = ""
	I1209 04:35:40.307887 1614600 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1209 04:35:40.307900 1614600 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1209 04:35:40.308079 1614600 command_runner.go:130] > # stream_tls_cert = ""
	I1209 04:35:40.308090 1614600 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1209 04:35:40.308097 1614600 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1209 04:35:40.308297 1614600 command_runner.go:130] > # stream_tls_key = ""
	I1209 04:35:40.308313 1614600 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1209 04:35:40.308326 1614600 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1209 04:35:40.308345 1614600 command_runner.go:130] > # automatically pick up the changes.
	I1209 04:35:40.308572 1614600 command_runner.go:130] > # stream_tls_ca = ""
	I1209 04:35:40.308610 1614600 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1209 04:35:40.308814 1614600 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1209 04:35:40.308835 1614600 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1209 04:35:40.309085 1614600 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
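
As a quick sanity check (not from the log), the commented gRPC limits above match the documented fallback of 80 * 1024 * 1024 bytes:

package main

import "fmt"

func main() {
	fmt.Println(80 * 1024 * 1024) // 83886080, the value shown above
}
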
	I1209 04:35:40.309103 1614600 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1209 04:35:40.309115 1614600 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1209 04:35:40.309119 1614600 command_runner.go:130] > [crio.runtime]
	I1209 04:35:40.309126 1614600 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1209 04:35:40.309132 1614600 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1209 04:35:40.309143 1614600 command_runner.go:130] > # "nofile=1024:2048"
	I1209 04:35:40.309150 1614600 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1209 04:35:40.309302 1614600 command_runner.go:130] > # default_ulimits = [
	I1209 04:35:40.309485 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.309504 1614600 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1209 04:35:40.309688 1614600 command_runner.go:130] > # no_pivot = false
	I1209 04:35:40.309706 1614600 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1209 04:35:40.309713 1614600 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1209 04:35:40.310551 1614600 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1209 04:35:40.310598 1614600 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1209 04:35:40.310608 1614600 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1209 04:35:40.310618 1614600 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1209 04:35:40.310767 1614600 command_runner.go:130] > # conmon = ""
	I1209 04:35:40.310786 1614600 command_runner.go:130] > # Cgroup setting for conmon
	I1209 04:35:40.310795 1614600 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1209 04:35:40.310806 1614600 command_runner.go:130] > conmon_cgroup = "pod"
	I1209 04:35:40.310814 1614600 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1209 04:35:40.310835 1614600 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1209 04:35:40.310842 1614600 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1209 04:35:40.310849 1614600 command_runner.go:130] > # conmon_env = [
	I1209 04:35:40.310857 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.310866 1614600 command_runner.go:130] > # Additional environment variables to set for all the
	I1209 04:35:40.310873 1614600 command_runner.go:130] > # containers. These are overridden if set in the
	I1209 04:35:40.310879 1614600 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1209 04:35:40.310886 1614600 command_runner.go:130] > # default_env = [
	I1209 04:35:40.310889 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.310895 1614600 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1209 04:35:40.310907 1614600 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1209 04:35:40.310914 1614600 command_runner.go:130] > # selinux = false
	I1209 04:35:40.310925 1614600 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1209 04:35:40.310933 1614600 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1209 04:35:40.310938 1614600 command_runner.go:130] > # This option supports live configuration reload.
	I1209 04:35:40.310944 1614600 command_runner.go:130] > # seccomp_profile = ""
	I1209 04:35:40.310954 1614600 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1209 04:35:40.310963 1614600 command_runner.go:130] > # This option supports live configuration reload.
	I1209 04:35:40.310968 1614600 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1209 04:35:40.310974 1614600 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1209 04:35:40.310984 1614600 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1209 04:35:40.310991 1614600 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1209 04:35:40.311002 1614600 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1209 04:35:40.311007 1614600 command_runner.go:130] > # This option supports live configuration reload.
	I1209 04:35:40.311011 1614600 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1209 04:35:40.311017 1614600 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1209 04:35:40.311022 1614600 command_runner.go:130] > # the cgroup blockio controller.
	I1209 04:35:40.311028 1614600 command_runner.go:130] > # blockio_config_file = ""
	I1209 04:35:40.311035 1614600 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1209 04:35:40.311042 1614600 command_runner.go:130] > # blockio parameters.
	I1209 04:35:40.311046 1614600 command_runner.go:130] > # blockio_reload = false
	I1209 04:35:40.311059 1614600 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1209 04:35:40.311064 1614600 command_runner.go:130] > # irqbalance daemon.
	I1209 04:35:40.311073 1614600 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1209 04:35:40.311083 1614600 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I1209 04:35:40.311091 1614600 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1209 04:35:40.311107 1614600 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1209 04:35:40.311272 1614600 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1209 04:35:40.311287 1614600 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1209 04:35:40.311293 1614600 command_runner.go:130] > # This option supports live configuration reload.
	I1209 04:35:40.311441 1614600 command_runner.go:130] > # rdt_config_file = ""
	I1209 04:35:40.311462 1614600 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1209 04:35:40.311467 1614600 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1209 04:35:40.311477 1614600 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1209 04:35:40.311487 1614600 command_runner.go:130] > # separate_pull_cgroup = ""
	I1209 04:35:40.311493 1614600 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1209 04:35:40.311505 1614600 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1209 04:35:40.311514 1614600 command_runner.go:130] > # will be added.
	I1209 04:35:40.311522 1614600 command_runner.go:130] > # default_capabilities = [
	I1209 04:35:40.311525 1614600 command_runner.go:130] > # 	"CHOWN",
	I1209 04:35:40.311531 1614600 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1209 04:35:40.311535 1614600 command_runner.go:130] > # 	"FSETID",
	I1209 04:35:40.311541 1614600 command_runner.go:130] > # 	"FOWNER",
	I1209 04:35:40.311545 1614600 command_runner.go:130] > # 	"SETGID",
	I1209 04:35:40.311548 1614600 command_runner.go:130] > # 	"SETUID",
	I1209 04:35:40.311573 1614600 command_runner.go:130] > # 	"SETPCAP",
	I1209 04:35:40.311581 1614600 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1209 04:35:40.311585 1614600 command_runner.go:130] > # 	"KILL",
	I1209 04:35:40.311752 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.311769 1614600 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1209 04:35:40.311777 1614600 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1209 04:35:40.311784 1614600 command_runner.go:130] > # add_inheritable_capabilities = false
	I1209 04:35:40.311790 1614600 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1209 04:35:40.311796 1614600 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1209 04:35:40.311802 1614600 command_runner.go:130] > default_sysctls = [
	I1209 04:35:40.311807 1614600 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1209 04:35:40.311811 1614600 command_runner.go:130] > ]
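
The single default sysctl above sets net.ipv4.ip_unprivileged_port_start to 0, which lets non-root container processes bind ports below 1024. A minimal hypothetical illustration; it succeeds only where that sysctl is in effect:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Binding :80 normally requires root (or CAP_NET_BIND_SERVICE); with
	// ip_unprivileged_port_start=0 an unprivileged process may do it too.
	ln, err := net.Listen("tcp", ":80")
	if err != nil {
		fmt.Println("bind failed (sysctl likely not applied):", err)
		return
	}
	defer ln.Close()
	fmt.Println("listening on", ln.Addr())
}
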
	I1209 04:35:40.311823 1614600 command_runner.go:130] > # List of devices on the host that a
	I1209 04:35:40.311829 1614600 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1209 04:35:40.311833 1614600 command_runner.go:130] > # allowed_devices = [
	I1209 04:35:40.311843 1614600 command_runner.go:130] > # 	"/dev/fuse",
	I1209 04:35:40.311847 1614600 command_runner.go:130] > # 	"/dev/net/tun",
	I1209 04:35:40.311851 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.311856 1614600 command_runner.go:130] > # List of additional devices, specified as
	I1209 04:35:40.311863 1614600 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1209 04:35:40.311870 1614600 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1209 04:35:40.311876 1614600 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1209 04:35:40.311883 1614600 command_runner.go:130] > # additional_devices = [
	I1209 04:35:40.311886 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.311896 1614600 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1209 04:35:40.311900 1614600 command_runner.go:130] > # cdi_spec_dirs = [
	I1209 04:35:40.311903 1614600 command_runner.go:130] > # 	"/etc/cdi",
	I1209 04:35:40.311908 1614600 command_runner.go:130] > # 	"/var/run/cdi",
	I1209 04:35:40.311916 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.311923 1614600 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1209 04:35:40.311929 1614600 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1209 04:35:40.311936 1614600 command_runner.go:130] > # Defaults to false.
	I1209 04:35:40.311942 1614600 command_runner.go:130] > # device_ownership_from_security_context = false
	I1209 04:35:40.311958 1614600 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1209 04:35:40.311969 1614600 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1209 04:35:40.311973 1614600 command_runner.go:130] > # hooks_dir = [
	I1209 04:35:40.311980 1614600 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1209 04:35:40.311986 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.311992 1614600 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1209 04:35:40.312007 1614600 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1209 04:35:40.312013 1614600 command_runner.go:130] > # its default mounts from the following two files:
	I1209 04:35:40.312021 1614600 command_runner.go:130] > #
	I1209 04:35:40.312027 1614600 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1209 04:35:40.312034 1614600 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1209 04:35:40.312039 1614600 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1209 04:35:40.312045 1614600 command_runner.go:130] > #
	I1209 04:35:40.312051 1614600 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1209 04:35:40.312057 1614600 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1209 04:35:40.312065 1614600 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1209 04:35:40.312074 1614600 command_runner.go:130] > #      only add mounts it finds in this file.
	I1209 04:35:40.312077 1614600 command_runner.go:130] > #
	I1209 04:35:40.312081 1614600 command_runner.go:130] > # default_mounts_file = ""
	I1209 04:35:40.312087 1614600 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1209 04:35:40.312097 1614600 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1209 04:35:40.312102 1614600 command_runner.go:130] > # pids_limit = -1
	I1209 04:35:40.312108 1614600 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1209 04:35:40.312120 1614600 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1209 04:35:40.312128 1614600 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1209 04:35:40.312137 1614600 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1209 04:35:40.312275 1614600 command_runner.go:130] > # log_size_max = -1
	I1209 04:35:40.312297 1614600 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1209 04:35:40.312305 1614600 command_runner.go:130] > # log_to_journald = false
	I1209 04:35:40.312312 1614600 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1209 04:35:40.312322 1614600 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1209 04:35:40.312328 1614600 command_runner.go:130] > # Path to directory for container attach sockets.
	I1209 04:35:40.312333 1614600 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1209 04:35:40.312338 1614600 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1209 04:35:40.312345 1614600 command_runner.go:130] > # bind_mount_prefix = ""
	I1209 04:35:40.312351 1614600 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1209 04:35:40.312355 1614600 command_runner.go:130] > # read_only = false
	I1209 04:35:40.312361 1614600 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1209 04:35:40.312373 1614600 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1209 04:35:40.312378 1614600 command_runner.go:130] > # live configuration reload.
	I1209 04:35:40.312551 1614600 command_runner.go:130] > # log_level = "info"
	I1209 04:35:40.312568 1614600 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1209 04:35:40.312574 1614600 command_runner.go:130] > # This option supports live configuration reload.
	I1209 04:35:40.312578 1614600 command_runner.go:130] > # log_filter = ""
	I1209 04:35:40.312588 1614600 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1209 04:35:40.312594 1614600 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1209 04:35:40.312600 1614600 command_runner.go:130] > # separated by comma.
	I1209 04:35:40.312614 1614600 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1209 04:35:40.312622 1614600 command_runner.go:130] > # uid_mappings = ""
	I1209 04:35:40.312629 1614600 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1209 04:35:40.312635 1614600 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1209 04:35:40.312644 1614600 command_runner.go:130] > # separated by comma.
	I1209 04:35:40.312652 1614600 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1209 04:35:40.312657 1614600 command_runner.go:130] > # gid_mappings = ""
	I1209 04:35:40.312663 1614600 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1209 04:35:40.312670 1614600 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1209 04:35:40.312676 1614600 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1209 04:35:40.312689 1614600 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1209 04:35:40.312694 1614600 command_runner.go:130] > # minimum_mappable_uid = -1
	I1209 04:35:40.312706 1614600 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1209 04:35:40.312713 1614600 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1209 04:35:40.312719 1614600 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1209 04:35:40.312730 1614600 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1209 04:35:40.312735 1614600 command_runner.go:130] > # minimum_mappable_gid = -1
	I1209 04:35:40.312745 1614600 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1209 04:35:40.312753 1614600 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1209 04:35:40.312759 1614600 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1209 04:35:40.312763 1614600 command_runner.go:130] > # ctr_stop_timeout = 30
	I1209 04:35:40.312771 1614600 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1209 04:35:40.312781 1614600 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1209 04:35:40.312787 1614600 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1209 04:35:40.312792 1614600 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1209 04:35:40.312800 1614600 command_runner.go:130] > # drop_infra_ctr = true
	I1209 04:35:40.312807 1614600 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1209 04:35:40.312813 1614600 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1209 04:35:40.312825 1614600 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1209 04:35:40.312831 1614600 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1209 04:35:40.312838 1614600 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1209 04:35:40.312846 1614600 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1209 04:35:40.312852 1614600 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1209 04:35:40.312863 1614600 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1209 04:35:40.312871 1614600 command_runner.go:130] > # shared_cpuset = ""
	I1209 04:35:40.312877 1614600 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1209 04:35:40.312882 1614600 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1209 04:35:40.312891 1614600 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1209 04:35:40.312899 1614600 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1209 04:35:40.312903 1614600 command_runner.go:130] > # pinns_path = ""
	I1209 04:35:40.312908 1614600 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1209 04:35:40.312919 1614600 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1209 04:35:40.312924 1614600 command_runner.go:130] > # enable_criu_support = true
	I1209 04:35:40.312929 1614600 command_runner.go:130] > # Enable/disable the generation of the container,
	I1209 04:35:40.312936 1614600 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1209 04:35:40.312940 1614600 command_runner.go:130] > # enable_pod_events = false
	I1209 04:35:40.312948 1614600 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1209 04:35:40.312957 1614600 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1209 04:35:40.312962 1614600 command_runner.go:130] > # default_runtime = "crun"
	I1209 04:35:40.312967 1614600 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1209 04:35:40.312984 1614600 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1209 04:35:40.312997 1614600 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1209 04:35:40.313003 1614600 command_runner.go:130] > # creation as a file is not desired either.
	I1209 04:35:40.313011 1614600 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1209 04:35:40.313018 1614600 command_runner.go:130] > # the hostname is being managed dynamically.
	I1209 04:35:40.313023 1614600 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1209 04:35:40.313241 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.313258 1614600 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1209 04:35:40.313265 1614600 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1209 04:35:40.313271 1614600 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1209 04:35:40.313279 1614600 command_runner.go:130] > # Each entry in the table should follow the format:
	I1209 04:35:40.313282 1614600 command_runner.go:130] > #
	I1209 04:35:40.313287 1614600 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1209 04:35:40.313298 1614600 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1209 04:35:40.313303 1614600 command_runner.go:130] > # runtime_type = "oci"
	I1209 04:35:40.313307 1614600 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1209 04:35:40.313320 1614600 command_runner.go:130] > # inherit_default_runtime = false
	I1209 04:35:40.313326 1614600 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1209 04:35:40.313335 1614600 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1209 04:35:40.313340 1614600 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1209 04:35:40.313344 1614600 command_runner.go:130] > # monitor_env = []
	I1209 04:35:40.313349 1614600 command_runner.go:130] > # privileged_without_host_devices = false
	I1209 04:35:40.313353 1614600 command_runner.go:130] > # allowed_annotations = []
	I1209 04:35:40.313359 1614600 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1209 04:35:40.313365 1614600 command_runner.go:130] > # no_sync_log = false
	I1209 04:35:40.313369 1614600 command_runner.go:130] > # default_annotations = {}
	I1209 04:35:40.313373 1614600 command_runner.go:130] > # stream_websockets = false
	I1209 04:35:40.313377 1614600 command_runner.go:130] > # seccomp_profile = ""
	I1209 04:35:40.313410 1614600 command_runner.go:130] > # Where:
	I1209 04:35:40.313420 1614600 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1209 04:35:40.313427 1614600 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1209 04:35:40.313440 1614600 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1209 04:35:40.313446 1614600 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1209 04:35:40.313450 1614600 command_runner.go:130] > #   in $PATH.
	I1209 04:35:40.313457 1614600 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1209 04:35:40.313465 1614600 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1209 04:35:40.313471 1614600 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1209 04:35:40.313477 1614600 command_runner.go:130] > #   state.
	I1209 04:35:40.313484 1614600 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1209 04:35:40.313498 1614600 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1209 04:35:40.313505 1614600 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1209 04:35:40.313515 1614600 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1209 04:35:40.313521 1614600 command_runner.go:130] > #   the values from the default runtime on load time.
	I1209 04:35:40.313528 1614600 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1209 04:35:40.313537 1614600 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1209 04:35:40.313543 1614600 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1209 04:35:40.313550 1614600 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1209 04:35:40.313558 1614600 command_runner.go:130] > #   The currently recognized values are:
	I1209 04:35:40.313565 1614600 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1209 04:35:40.313575 1614600 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1209 04:35:40.313584 1614600 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1209 04:35:40.313591 1614600 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1209 04:35:40.313599 1614600 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1209 04:35:40.313611 1614600 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1209 04:35:40.313618 1614600 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1209 04:35:40.313632 1614600 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1209 04:35:40.313638 1614600 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1209 04:35:40.313644 1614600 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1209 04:35:40.313651 1614600 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1209 04:35:40.313662 1614600 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1209 04:35:40.313668 1614600 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1209 04:35:40.313674 1614600 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1209 04:35:40.313684 1614600 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1209 04:35:40.313693 1614600 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1209 04:35:40.313703 1614600 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1209 04:35:40.313707 1614600 command_runner.go:130] > #   deprecated option "conmon".
	I1209 04:35:40.313715 1614600 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1209 04:35:40.313721 1614600 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1209 04:35:40.313730 1614600 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1209 04:35:40.313735 1614600 command_runner.go:130] > #   should be moved to the container's cgroup
	I1209 04:35:40.313742 1614600 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1209 04:35:40.313752 1614600 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1209 04:35:40.313763 1614600 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1209 04:35:40.313771 1614600 command_runner.go:130] > #   conmon-rs by using:
	I1209 04:35:40.313779 1614600 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1209 04:35:40.313788 1614600 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1209 04:35:40.313799 1614600 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1209 04:35:40.313806 1614600 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1209 04:35:40.313811 1614600 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1209 04:35:40.313818 1614600 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1209 04:35:40.313825 1614600 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1209 04:35:40.313830 1614600 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1209 04:35:40.313842 1614600 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1209 04:35:40.313852 1614600 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1209 04:35:40.313860 1614600 command_runner.go:130] > #   when a machine crash happens.
	I1209 04:35:40.313868 1614600 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1209 04:35:40.313881 1614600 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1209 04:35:40.313889 1614600 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1209 04:35:40.313894 1614600 command_runner.go:130] > #   seccomp profile for the runtime.
	I1209 04:35:40.313900 1614600 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1209 04:35:40.313911 1614600 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1209 04:35:40.313915 1614600 command_runner.go:130] > #
	I1209 04:35:40.313919 1614600 command_runner.go:130] > # Using the seccomp notifier feature:
	I1209 04:35:40.313927 1614600 command_runner.go:130] > #
	I1209 04:35:40.313934 1614600 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1209 04:35:40.313942 1614600 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1209 04:35:40.313949 1614600 command_runner.go:130] > #
	I1209 04:35:40.313955 1614600 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1209 04:35:40.313962 1614600 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1209 04:35:40.313965 1614600 command_runner.go:130] > #
	I1209 04:35:40.313971 1614600 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1209 04:35:40.313974 1614600 command_runner.go:130] > # feature.
	I1209 04:35:40.313977 1614600 command_runner.go:130] > #
	I1209 04:35:40.313983 1614600 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1209 04:35:40.313992 1614600 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1209 04:35:40.314004 1614600 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1209 04:35:40.314014 1614600 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1209 04:35:40.314021 1614600 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1209 04:35:40.314029 1614600 command_runner.go:130] > #
	I1209 04:35:40.314036 1614600 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1209 04:35:40.314042 1614600 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1209 04:35:40.314045 1614600 command_runner.go:130] > #
	I1209 04:35:40.314051 1614600 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1209 04:35:40.314057 1614600 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1209 04:35:40.314063 1614600 command_runner.go:130] > #
	I1209 04:35:40.314070 1614600 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1209 04:35:40.314076 1614600 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1209 04:35:40.314083 1614600 command_runner.go:130] > # limitation.
	I1209 04:35:40.314088 1614600 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1209 04:35:40.314093 1614600 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1209 04:35:40.314104 1614600 command_runner.go:130] > runtime_type = ""
	I1209 04:35:40.314108 1614600 command_runner.go:130] > runtime_root = "/run/crun"
	I1209 04:35:40.314112 1614600 command_runner.go:130] > inherit_default_runtime = false
	I1209 04:35:40.314120 1614600 command_runner.go:130] > runtime_config_path = ""
	I1209 04:35:40.314124 1614600 command_runner.go:130] > container_min_memory = ""
	I1209 04:35:40.314130 1614600 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1209 04:35:40.314134 1614600 command_runner.go:130] > monitor_cgroup = "pod"
	I1209 04:35:40.314138 1614600 command_runner.go:130] > monitor_exec_cgroup = ""
	I1209 04:35:40.314142 1614600 command_runner.go:130] > allowed_annotations = [
	I1209 04:35:40.314152 1614600 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1209 04:35:40.314155 1614600 command_runner.go:130] > ]
	I1209 04:35:40.314159 1614600 command_runner.go:130] > privileged_without_host_devices = false
	I1209 04:35:40.314164 1614600 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1209 04:35:40.314172 1614600 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1209 04:35:40.314177 1614600 command_runner.go:130] > runtime_type = ""
	I1209 04:35:40.314181 1614600 command_runner.go:130] > runtime_root = "/run/runc"
	I1209 04:35:40.314191 1614600 command_runner.go:130] > inherit_default_runtime = false
	I1209 04:35:40.314195 1614600 command_runner.go:130] > runtime_config_path = ""
	I1209 04:35:40.314203 1614600 command_runner.go:130] > container_min_memory = ""
	I1209 04:35:40.314208 1614600 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1209 04:35:40.314211 1614600 command_runner.go:130] > monitor_cgroup = "pod"
	I1209 04:35:40.314215 1614600 command_runner.go:130] > monitor_exec_cgroup = ""
	I1209 04:35:40.314219 1614600 command_runner.go:130] > privileged_without_host_devices = false
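
The crun and runc tables above follow the [crio.runtime.runtimes.<name>] format documented earlier. A minimal sketch of reading such a table with the github.com/BurntSushi/toml parser (the struct shape is an assumption for illustration, not CRI-O's full schema):

package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

// runtimeHandler mirrors a few of the per-handler fields shown above.
type runtimeHandler struct {
	RuntimePath        string   `toml:"runtime_path"`
	RuntimeRoot        string   `toml:"runtime_root"`
	MonitorPath        string   `toml:"monitor_path"`
	MonitorCgroup      string   `toml:"monitor_cgroup"`
	AllowedAnnotations []string `toml:"allowed_annotations"`
}

func main() {
	// Values copied from the crun entry in the config dump above.
	const snippet = `
[crio.runtime.runtimes.crun]
runtime_path = "/usr/libexec/crio/crun"
runtime_root = "/run/crun"
monitor_path = "/usr/libexec/crio/conmon"
monitor_cgroup = "pod"
allowed_annotations = ["io.containers.trace-syscall"]
`
	var cfg struct {
		Crio struct {
			Runtime struct {
				Runtimes map[string]runtimeHandler `toml:"runtimes"`
			} `toml:"runtime"`
		} `toml:"crio"`
	}
	if _, err := toml.Decode(snippet, &cfg); err != nil {
		panic(err)
	}
	for name, rt := range cfg.Crio.Runtime.Runtimes {
		fmt.Printf("%s -> %s (monitor: %s)\n", name, rt.RuntimePath, rt.MonitorPath)
	}
}
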
	I1209 04:35:40.314440 1614600 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1209 04:35:40.314455 1614600 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1209 04:35:40.314461 1614600 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1209 04:35:40.314470 1614600 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1209 04:35:40.314481 1614600 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1209 04:35:40.314491 1614600 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1209 04:35:40.314503 1614600 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1209 04:35:40.314509 1614600 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1209 04:35:40.314523 1614600 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1209 04:35:40.314532 1614600 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1209 04:35:40.314548 1614600 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1209 04:35:40.314556 1614600 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1209 04:35:40.314560 1614600 command_runner.go:130] > # Example:
	I1209 04:35:40.314565 1614600 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1209 04:35:40.314584 1614600 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1209 04:35:40.314596 1614600 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1209 04:35:40.314602 1614600 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1209 04:35:40.314611 1614600 command_runner.go:130] > # cpuset = "0-1"
	I1209 04:35:40.314615 1614600 command_runner.go:130] > # cpushares = "5"
	I1209 04:35:40.314619 1614600 command_runner.go:130] > # cpuquota = "1000"
	I1209 04:35:40.314623 1614600 command_runner.go:130] > # cpuperiod = "100000"
	I1209 04:35:40.314627 1614600 command_runner.go:130] > # cpulimit = "35"
	I1209 04:35:40.314630 1614600 command_runner.go:130] > # Where:
	I1209 04:35:40.314634 1614600 command_runner.go:130] > # The workload name is workload-type.
	I1209 04:35:40.314642 1614600 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1209 04:35:40.314651 1614600 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1209 04:35:40.314657 1614600 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1209 04:35:40.314665 1614600 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1209 04:35:40.314675 1614600 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
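A quick worked example of the cpulimit arithmetic described above (derived from the sample values in this config dump, not from this run): 35 millicores is 0.035 of a CPU, so with cpuperiod = 100000 microseconds the computed quota is 0.035 × 100000 = 3500 microseconds, and per the note above this value overrides the configured cpuquota of 1000.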
	I1209 04:35:40.314680 1614600 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1209 04:35:40.314688 1614600 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1209 04:35:40.314695 1614600 command_runner.go:130] > # Default value is set to true
	I1209 04:35:40.314700 1614600 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1209 04:35:40.314706 1614600 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1209 04:35:40.314710 1614600 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1209 04:35:40.314715 1614600 command_runner.go:130] > # Default value is set to 'false'
	I1209 04:35:40.314719 1614600 command_runner.go:130] > # disable_hostport_mapping = false
	I1209 04:35:40.314731 1614600 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1209 04:35:40.314740 1614600 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1209 04:35:40.314747 1614600 command_runner.go:130] > # timezone = ""
	I1209 04:35:40.314754 1614600 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1209 04:35:40.314757 1614600 command_runner.go:130] > #
	I1209 04:35:40.314763 1614600 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1209 04:35:40.314777 1614600 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1209 04:35:40.314781 1614600 command_runner.go:130] > [crio.image]
	I1209 04:35:40.314787 1614600 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1209 04:35:40.314791 1614600 command_runner.go:130] > # default_transport = "docker://"
	I1209 04:35:40.314797 1614600 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1209 04:35:40.314810 1614600 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1209 04:35:40.314814 1614600 command_runner.go:130] > # global_auth_file = ""
	I1209 04:35:40.314819 1614600 command_runner.go:130] > # The image used to instantiate infra containers.
	I1209 04:35:40.314829 1614600 command_runner.go:130] > # This option supports live configuration reload.
	I1209 04:35:40.314834 1614600 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1209 04:35:40.314841 1614600 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1209 04:35:40.314852 1614600 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1209 04:35:40.314858 1614600 command_runner.go:130] > # This option supports live configuration reload.
	I1209 04:35:40.314863 1614600 command_runner.go:130] > # pause_image_auth_file = ""
	I1209 04:35:40.314868 1614600 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1209 04:35:40.314875 1614600 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1209 04:35:40.314888 1614600 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1209 04:35:40.314904 1614600 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1209 04:35:40.314909 1614600 command_runner.go:130] > # pause_command = "/pause"
	I1209 04:35:40.314915 1614600 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1209 04:35:40.314924 1614600 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1209 04:35:40.314931 1614600 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1209 04:35:40.314942 1614600 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1209 04:35:40.314949 1614600 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1209 04:35:40.314955 1614600 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1209 04:35:40.314959 1614600 command_runner.go:130] > # pinned_images = [
	I1209 04:35:40.314961 1614600 command_runner.go:130] > # ]
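For illustration only (these entries are hypothetical, not part of this run's config), a pinned_images list exercising the three pattern kinds described above:

	# pinned_images = [
	# 	"registry.k8s.io/pause:3.10.1",	# exact: must match the entire name
	# 	"registry.k8s.io/*",	# glob: wildcard only at the end
	# 	"*pause*",	# keyword: wildcards on both ends
	# ]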
	I1209 04:35:40.314968 1614600 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1209 04:35:40.314978 1614600 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1209 04:35:40.314984 1614600 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1209 04:35:40.314995 1614600 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1209 04:35:40.315001 1614600 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1209 04:35:40.315011 1614600 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1209 04:35:40.315023 1614600 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1209 04:35:40.315031 1614600 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1209 04:35:40.315037 1614600 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1209 04:35:40.315049 1614600 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1209 04:35:40.315055 1614600 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1209 04:35:40.315065 1614600 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1209 04:35:40.315071 1614600 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1209 04:35:40.315078 1614600 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1209 04:35:40.315086 1614600 command_runner.go:130] > # changing them here.
	I1209 04:35:40.315091 1614600 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1209 04:35:40.315095 1614600 command_runner.go:130] > # insecure_registries = [
	I1209 04:35:40.315099 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.315108 1614600 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1209 04:35:40.315114 1614600 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1209 04:35:40.315319 1614600 command_runner.go:130] > # image_volumes = "mkdir"
	I1209 04:35:40.315344 1614600 command_runner.go:130] > # Temporary directory to use for storing big files
	I1209 04:35:40.315350 1614600 command_runner.go:130] > # big_files_temporary_dir = ""
	I1209 04:35:40.315355 1614600 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1209 04:35:40.315362 1614600 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1209 04:35:40.315367 1614600 command_runner.go:130] > # auto_reload_registries = false
	I1209 04:35:40.315372 1614600 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1209 04:35:40.315381 1614600 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval as pull_progress_timeout / 10.
	I1209 04:35:40.315390 1614600 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1209 04:35:40.315399 1614600 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1209 04:35:40.315404 1614600 command_runner.go:130] > # The mode of short name resolution.
	I1209 04:35:40.315411 1614600 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1209 04:35:40.315422 1614600 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1209 04:35:40.315430 1614600 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1209 04:35:40.315434 1614600 command_runner.go:130] > # short_name_mode = "enforcing"
	I1209 04:35:40.315440 1614600 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1209 04:35:40.315446 1614600 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1209 04:35:40.315450 1614600 command_runner.go:130] > # oci_artifact_mount_support = true
	I1209 04:35:40.315456 1614600 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1209 04:35:40.315460 1614600 command_runner.go:130] > # CNI plugins.
	I1209 04:35:40.315463 1614600 command_runner.go:130] > [crio.network]
	I1209 04:35:40.315469 1614600 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1209 04:35:40.315475 1614600 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1209 04:35:40.315482 1614600 command_runner.go:130] > # cni_default_network = ""
	I1209 04:35:40.315488 1614600 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1209 04:35:40.315493 1614600 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1209 04:35:40.315503 1614600 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1209 04:35:40.315507 1614600 command_runner.go:130] > # plugin_dirs = [
	I1209 04:35:40.315515 1614600 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1209 04:35:40.315519 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.315526 1614600 command_runner.go:130] > # List of included pod metrics.
	I1209 04:35:40.315530 1614600 command_runner.go:130] > # included_pod_metrics = [
	I1209 04:35:40.315533 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.315539 1614600 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1209 04:35:40.315542 1614600 command_runner.go:130] > [crio.metrics]
	I1209 04:35:40.315547 1614600 command_runner.go:130] > # Globally enable or disable metrics support.
	I1209 04:35:40.315552 1614600 command_runner.go:130] > # enable_metrics = false
	I1209 04:35:40.315562 1614600 command_runner.go:130] > # Specify enabled metrics collectors.
	I1209 04:35:40.315567 1614600 command_runner.go:130] > # Per default all metrics are enabled.
	I1209 04:35:40.315573 1614600 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1209 04:35:40.315587 1614600 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1209 04:35:40.315593 1614600 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1209 04:35:40.315601 1614600 command_runner.go:130] > # metrics_collectors = [
	I1209 04:35:40.315605 1614600 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1209 04:35:40.315610 1614600 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1209 04:35:40.315614 1614600 command_runner.go:130] > # 	"containers_oom_total",
	I1209 04:35:40.315617 1614600 command_runner.go:130] > # 	"processes_defunct",
	I1209 04:35:40.315621 1614600 command_runner.go:130] > # 	"operations_total",
	I1209 04:35:40.315626 1614600 command_runner.go:130] > # 	"operations_latency_seconds",
	I1209 04:35:40.315630 1614600 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1209 04:35:40.315635 1614600 command_runner.go:130] > # 	"operations_errors_total",
	I1209 04:35:40.315638 1614600 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1209 04:35:40.315642 1614600 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1209 04:35:40.315646 1614600 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1209 04:35:40.315651 1614600 command_runner.go:130] > # 	"image_pulls_success_total",
	I1209 04:35:40.315661 1614600 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1209 04:35:40.315666 1614600 command_runner.go:130] > # 	"containers_oom_count_total",
	I1209 04:35:40.315675 1614600 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1209 04:35:40.315849 1614600 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1209 04:35:40.315864 1614600 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1209 04:35:40.315868 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.315880 1614600 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1209 04:35:40.315884 1614600 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1209 04:35:40.315889 1614600 command_runner.go:130] > # The port on which the metrics server will listen.
	I1209 04:35:40.315893 1614600 command_runner.go:130] > # metrics_port = 9090
	I1209 04:35:40.315899 1614600 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1209 04:35:40.315907 1614600 command_runner.go:130] > # metrics_socket = ""
	I1209 04:35:40.315912 1614600 command_runner.go:130] > # The certificate for the secure metrics server.
	I1209 04:35:40.315921 1614600 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1209 04:35:40.315929 1614600 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1209 04:35:40.315937 1614600 command_runner.go:130] > # certificate on any modification event.
	I1209 04:35:40.315944 1614600 command_runner.go:130] > # metrics_cert = ""
	I1209 04:35:40.315953 1614600 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1209 04:35:40.315959 1614600 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1209 04:35:40.315968 1614600 command_runner.go:130] > # metrics_key = ""
	I1209 04:35:40.315974 1614600 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1209 04:35:40.315982 1614600 command_runner.go:130] > [crio.tracing]
	I1209 04:35:40.315987 1614600 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1209 04:35:40.315996 1614600 command_runner.go:130] > # enable_tracing = false
	I1209 04:35:40.316002 1614600 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1209 04:35:40.316009 1614600 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1209 04:35:40.316017 1614600 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1209 04:35:40.316027 1614600 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1209 04:35:40.316032 1614600 command_runner.go:130] > # CRI-O NRI configuration.
	I1209 04:35:40.316035 1614600 command_runner.go:130] > [crio.nri]
	I1209 04:35:40.316040 1614600 command_runner.go:130] > # Globally enable or disable NRI.
	I1209 04:35:40.316043 1614600 command_runner.go:130] > # enable_nri = true
	I1209 04:35:40.316047 1614600 command_runner.go:130] > # NRI socket to listen on.
	I1209 04:35:40.316051 1614600 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1209 04:35:40.316055 1614600 command_runner.go:130] > # NRI plugin directory to use.
	I1209 04:35:40.316064 1614600 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1209 04:35:40.316069 1614600 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1209 04:35:40.316077 1614600 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1209 04:35:40.316083 1614600 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1209 04:35:40.316147 1614600 command_runner.go:130] > # nri_disable_connections = false
	I1209 04:35:40.316157 1614600 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1209 04:35:40.316162 1614600 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1209 04:35:40.316185 1614600 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1209 04:35:40.316193 1614600 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1209 04:35:40.316198 1614600 command_runner.go:130] > # NRI default validator configuration.
	I1209 04:35:40.316205 1614600 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1209 04:35:40.316215 1614600 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1209 04:35:40.316220 1614600 command_runner.go:130] > # can be restricted/rejected:
	I1209 04:35:40.316224 1614600 command_runner.go:130] > # - OCI hook injection
	I1209 04:35:40.316233 1614600 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1209 04:35:40.316238 1614600 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1209 04:35:40.316243 1614600 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1209 04:35:40.316247 1614600 command_runner.go:130] > # - adjustment of linux namespaces
	I1209 04:35:40.316254 1614600 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1209 04:35:40.316264 1614600 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1209 04:35:40.316271 1614600 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1209 04:35:40.316277 1614600 command_runner.go:130] > #
	I1209 04:35:40.316282 1614600 command_runner.go:130] > # [crio.nri.default_validator]
	I1209 04:35:40.316290 1614600 command_runner.go:130] > # nri_enable_default_validator = false
	I1209 04:35:40.316295 1614600 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1209 04:35:40.316307 1614600 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1209 04:35:40.316317 1614600 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1209 04:35:40.316322 1614600 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1209 04:35:40.316327 1614600 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1209 04:35:40.316480 1614600 command_runner.go:130] > # nri_validator_required_plugins = [
	I1209 04:35:40.316508 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.316521 1614600 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1209 04:35:40.316528 1614600 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1209 04:35:40.316540 1614600 command_runner.go:130] > [crio.stats]
	I1209 04:35:40.316546 1614600 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1209 04:35:40.316551 1614600 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1209 04:35:40.316555 1614600 command_runner.go:130] > # stats_collection_period = 0
	I1209 04:35:40.316562 1614600 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1209 04:35:40.316572 1614600 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1209 04:35:40.316577 1614600 command_runner.go:130] > # collection_period = 0
	I1209 04:35:40.318311 1614600 command_runner.go:130] ! time="2025-12-09T04:35:40.282255082Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1209 04:35:40.318330 1614600 command_runner.go:130] ! time="2025-12-09T04:35:40.2822971Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1209 04:35:40.318340 1614600 command_runner.go:130] ! time="2025-12-09T04:35:40.282328904Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1209 04:35:40.318349 1614600 command_runner.go:130] ! time="2025-12-09T04:35:40.282355243Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1209 04:35:40.318358 1614600 command_runner.go:130] ! time="2025-12-09T04:35:40.282430665Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:35:40.318367 1614600 command_runner.go:130] ! time="2025-12-09T04:35:40.282713695Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1209 04:35:40.318382 1614600 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1209 04:35:40.318459 1614600 cni.go:84] Creating CNI manager for ""
	I1209 04:35:40.318484 1614600 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 04:35:40.318506 1614600 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 04:35:40.318532 1614600 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-331811 NodeName:functional-331811 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 04:35:40.318689 1614600 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-331811"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 04:35:40.318765 1614600 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1209 04:35:40.328360 1614600 command_runner.go:130] > kubeadm
	I1209 04:35:40.328381 1614600 command_runner.go:130] > kubectl
	I1209 04:35:40.328387 1614600 command_runner.go:130] > kubelet
	I1209 04:35:40.329285 1614600 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 04:35:40.329353 1614600 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 04:35:40.336944 1614600 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1209 04:35:40.349970 1614600 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1209 04:35:40.362809 1614600 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1209 04:35:40.375503 1614600 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1209 04:35:40.379345 1614600 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1209 04:35:40.379778 1614600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:35:40.502305 1614600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 04:35:41.326409 1614600 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811 for IP: 192.168.49.2
	I1209 04:35:41.326563 1614600 certs.go:195] generating shared ca certs ...
	I1209 04:35:41.326611 1614600 certs.go:227] acquiring lock for ca certs: {Name:mkbe8bce08db7aa945866791683d426e1b560718 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:35:41.326833 1614600 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key
	I1209 04:35:41.326887 1614600 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key
	I1209 04:35:41.326895 1614600 certs.go:257] generating profile certs ...
	I1209 04:35:41.327067 1614600 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.key
	I1209 04:35:41.327129 1614600 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.key.29f4af34
	I1209 04:35:41.327233 1614600 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.key
	I1209 04:35:41.327250 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 04:35:41.327267 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 04:35:41.327279 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 04:35:41.327290 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 04:35:41.327349 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 04:35:41.327367 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 04:35:41.327413 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 04:35:41.327427 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 04:35:41.327509 1614600 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem (1338 bytes)
	W1209 04:35:41.327593 1614600 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521_empty.pem, impossibly tiny 0 bytes
	I1209 04:35:41.327604 1614600 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 04:35:41.327677 1614600 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem (1078 bytes)
	I1209 04:35:41.327750 1614600 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem (1123 bytes)
	I1209 04:35:41.327813 1614600 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem (1675 bytes)
	I1209 04:35:41.327913 1614600 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem (1708 bytes)
	I1209 04:35:41.327983 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem -> /usr/share/ca-certificates/1580521.pem
	I1209 04:35:41.328001 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem -> /usr/share/ca-certificates/15805212.pem
	I1209 04:35:41.328047 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:35:41.328720 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 04:35:41.349998 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 04:35:41.370613 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 04:35:41.391438 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 04:35:41.410483 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 04:35:41.429428 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 04:35:41.449234 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 04:35:41.468289 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 04:35:41.486148 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem --> /usr/share/ca-certificates/1580521.pem (1338 bytes)
	I1209 04:35:41.504497 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem --> /usr/share/ca-certificates/15805212.pem (1708 bytes)
	I1209 04:35:41.523111 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 04:35:41.542281 1614600 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 04:35:41.555566 1614600 ssh_runner.go:195] Run: openssl version
	I1209 04:35:41.561986 1614600 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1209 04:35:41.562090 1614600 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1580521.pem
	I1209 04:35:41.569846 1614600 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1580521.pem /etc/ssl/certs/1580521.pem
	I1209 04:35:41.577817 1614600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1580521.pem
	I1209 04:35:41.581778 1614600 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  9 04:27 /usr/share/ca-certificates/1580521.pem
	I1209 04:35:41.581849 1614600 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:27 /usr/share/ca-certificates/1580521.pem
	I1209 04:35:41.581927 1614600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1580521.pem
	I1209 04:35:41.622889 1614600 command_runner.go:130] > 51391683
	I1209 04:35:41.623441 1614600 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 04:35:41.630995 1614600 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15805212.pem
	I1209 04:35:41.638454 1614600 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15805212.pem /etc/ssl/certs/15805212.pem
	I1209 04:35:41.646110 1614600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15805212.pem
	I1209 04:35:41.649703 1614600 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  9 04:27 /usr/share/ca-certificates/15805212.pem
	I1209 04:35:41.649815 1614600 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:27 /usr/share/ca-certificates/15805212.pem
	I1209 04:35:41.649886 1614600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15805212.pem
	I1209 04:35:41.690940 1614600 command_runner.go:130] > 3ec20f2e
	I1209 04:35:41.691023 1614600 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 04:35:41.698710 1614600 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:35:41.705943 1614600 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 04:35:41.713451 1614600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:35:41.717157 1614600 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  9 04:17 /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:35:41.717250 1614600 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:17 /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:35:41.717310 1614600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:35:41.757537 1614600 command_runner.go:130] > b5213941
	I1209 04:35:41.757976 1614600 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
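The three hash/symlink round-trips above implement OpenSSL's subject-hash lookup: a CA certificate is trusted system-wide once /etc/ssl/certs contains a symlink named <subject-hash>.0 pointing at the PEM file. A minimal local Go sketch of that wiring (illustrative only; minikube runs these steps over SSH via ssh_runner, and the cert path below is just the minikubeCA example from the log):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		// Example path taken from the log above.
		cert := "/usr/share/ca-certificates/minikubeCA.pem"

		// "openssl x509 -hash -noout" prints the subject hash, e.g. "b5213941".
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out))

		// OpenSSL resolves trusted CAs by probing "<hash>.0" in the certs dir.
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		if err := os.Symlink(cert, link); err != nil && !os.IsExist(err) {
			panic(err)
		}
		fmt.Println("linked", link, "->", cert)
	}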
	I1209 04:35:41.765482 1614600 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 04:35:41.769213 1614600 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 04:35:41.769237 1614600 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1209 04:35:41.769244 1614600 command_runner.go:130] > Device: 259,1	Inode: 1322432     Links: 1
	I1209 04:35:41.769251 1614600 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1209 04:35:41.769256 1614600 command_runner.go:130] > Access: 2025-12-09 04:31:33.728838377 +0000
	I1209 04:35:41.769262 1614600 command_runner.go:130] > Modify: 2025-12-09 04:27:28.466831926 +0000
	I1209 04:35:41.769267 1614600 command_runner.go:130] > Change: 2025-12-09 04:27:28.466831926 +0000
	I1209 04:35:41.769272 1614600 command_runner.go:130] >  Birth: 2025-12-09 04:27:28.466831926 +0000
	I1209 04:35:41.769363 1614600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 04:35:41.810027 1614600 command_runner.go:130] > Certificate will not expire
	I1209 04:35:41.810619 1614600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 04:35:41.851168 1614600 command_runner.go:130] > Certificate will not expire
	I1209 04:35:41.851713 1614600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 04:35:41.892758 1614600 command_runner.go:130] > Certificate will not expire
	I1209 04:35:41.892839 1614600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 04:35:41.938176 1614600 command_runner.go:130] > Certificate will not expire
	I1209 04:35:41.938689 1614600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 04:35:41.979665 1614600 command_runner.go:130] > Certificate will not expire
	I1209 04:35:41.980184 1614600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1209 04:35:42.021167 1614600 command_runner.go:130] > Certificate will not expire
	I1209 04:35:42.021686 1614600 kubeadm.go:401] StartCluster: {Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:35:42.021825 1614600 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 04:35:42.021936 1614600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:35:42.052115 1614600 cri.go:89] found id: ""
	I1209 04:35:42.052191 1614600 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 04:35:42.060116 1614600 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1209 04:35:42.060196 1614600 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1209 04:35:42.060220 1614600 command_runner.go:130] > /var/lib/minikube/etcd:
	I1209 04:35:42.061227 1614600 kubeadm.go:417] found existing configuration files, will attempt cluster restart
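A simplified sketch of the restart-vs-init decision logged above (assumptions: runs locally instead of over SSH, and collapses minikube's error handling): the presence of the two kubelet files plus the etcd data directory is what selects the cluster-restart path instead of a fresh kubeadm init.

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// The same three paths listed by the "sudo ls" above.
		needed := []string{
			"/var/lib/kubelet/kubeadm-flags.env",
			"/var/lib/kubelet/config.yaml",
			"/var/lib/minikube/etcd",
		}
		for _, p := range needed {
			if _, err := os.Stat(p); err != nil {
				fmt.Println("no prior state, would run kubeadm init")
				return
			}
		}
		fmt.Println("found existing configuration files, will attempt cluster restart")
	}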
	I1209 04:35:42.061247 1614600 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1209 04:35:42.061342 1614600 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 04:35:42.070417 1614600 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:35:42.071064 1614600 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-331811" does not appear in /home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:35:42.071256 1614600 kubeconfig.go:62] /home/jenkins/minikube-integration/22081-1577059/kubeconfig needs updating (will repair): [kubeconfig missing "functional-331811" cluster setting kubeconfig missing "functional-331811" context setting]
	I1209 04:35:42.071646 1614600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/kubeconfig: {Name:mk56da51bd85daae017f7ca18ae73d8a385a4c6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:35:42.072159 1614600 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:35:42.072417 1614600 kapi.go:59] client config for functional-331811: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt", KeyFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.key", CAFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 04:35:42.073140 1614600 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1209 04:35:42.073224 1614600 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1209 04:35:42.073266 1614600 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1209 04:35:42.073391 1614600 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1209 04:35:42.073418 1614600 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1209 04:35:42.073437 1614600 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1209 04:35:42.073813 1614600 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 04:35:42.085766 1614600 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1209 04:35:42.085868 1614600 kubeadm.go:602] duration metric: took 24.612846ms to restartPrimaryControlPlane
	I1209 04:35:42.085898 1614600 kubeadm.go:403] duration metric: took 64.220222ms to StartCluster
	I1209 04:35:42.085947 1614600 settings.go:142] acquiring lock: {Name:mk2ff9b0d23dc8757d89015af482b8c477568e49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:35:42.086095 1614600 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:35:42.086834 1614600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/kubeconfig: {Name:mk56da51bd85daae017f7ca18ae73d8a385a4c6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:35:42.087380 1614600 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 04:35:42.087524 1614600 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 04:35:42.087628 1614600 addons.go:70] Setting storage-provisioner=true in profile "functional-331811"
	I1209 04:35:42.087691 1614600 addons.go:239] Setting addon storage-provisioner=true in "functional-331811"
	I1209 04:35:42.087740 1614600 host.go:66] Checking if "functional-331811" exists ...
	I1209 04:35:42.088325 1614600 cli_runner.go:164] Run: docker container inspect functional-331811 --format={{.State.Status}}
	I1209 04:35:42.087482 1614600 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 04:35:42.089019 1614600 addons.go:70] Setting default-storageclass=true in profile "functional-331811"
	I1209 04:35:42.089039 1614600 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-331811"
	I1209 04:35:42.089353 1614600 cli_runner.go:164] Run: docker container inspect functional-331811 --format={{.State.Status}}
	I1209 04:35:42.092155 1614600 out.go:179] * Verifying Kubernetes components...
	I1209 04:35:42.095248 1614600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:35:42.128430 1614600 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 04:35:42.131623 1614600 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:42.131651 1614600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 04:35:42.131731 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:42.147694 1614600 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:35:42.147902 1614600 kapi.go:59] client config for functional-331811: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt", KeyFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.key", CAFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 04:35:42.148207 1614600 addons.go:239] Setting addon default-storageclass=true in "functional-331811"
	I1209 04:35:42.148248 1614600 host.go:66] Checking if "functional-331811" exists ...
	I1209 04:35:42.148712 1614600 cli_runner.go:164] Run: docker container inspect functional-331811 --format={{.State.Status}}
	I1209 04:35:42.182846 1614600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:35:42.193184 1614600 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:42.193209 1614600 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 04:35:42.193289 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:42.220341 1614600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:35:42.327312 1614600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 04:35:42.346850 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:42.376931 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:43.076226 1614600 node_ready.go:35] waiting up to 6m0s for node "functional-331811" to be "Ready" ...
	I1209 04:35:43.076344 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:43.076396 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:43.076607 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:43.076635 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.076655 1614600 retry.go:31] will retry after 310.700454ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.076685 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:43.076702 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.076708 1614600 retry.go:31] will retry after 282.763546ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.076773 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
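
Every apply in this stretch fails the same way: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, and with nothing listening on localhost:8441 the download itself is refused, so the manifest is never even submitted. minikube responds by re-running the apply after a growing, slightly randomized delay (310ms and 282ms here, climbing toward ~8s later in the log). A minimal sketch of that retry shape, with hypothetical names (this is not minikube's actual retry.go API):

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    // applyWithRetry re-runs `kubectl apply` until it succeeds or the
    // attempt budget runs out, roughly doubling the delay each time.
    func applyWithRetry(manifest string, attempts int) error {
        delay := 300 * time.Millisecond
        var err error
        for i := 0; i < attempts; i++ {
            err = exec.Command("kubectl", "apply", "--force", "-f", manifest).Run()
            if err == nil {
                return nil
            }
            // Jitter keeps concurrent appliers from retrying in lockstep.
            sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
            fmt.Printf("apply failed, will retry after %v: %v\n", sleep, err)
            time.Sleep(sleep)
            delay *= 2
        }
        return err
    }

    func main() {
        if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
            fmt.Println("giving up:", err)
        }
    }

The jitter matters in this log because two manifests (storageclass.yaml and storage-provisioner.yaml) are being applied concurrently; without it their retry timestamps would stay synchronized instead of interleaving as they do below.
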
	I1209 04:35:43.360393 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:43.387801 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:43.432930 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:43.433022 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.433059 1614600 retry.go:31] will retry after 489.220325ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.460835 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:43.460941 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.460967 1614600 retry.go:31] will retry after 355.931225ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.577252 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:43.577329 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:43.577711 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:43.817107 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:43.911473 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:43.915604 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.915640 1614600 retry.go:31] will retry after 537.488813ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.922787 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:43.976592 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:43.980371 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.980407 1614600 retry.go:31] will retry after 753.380628ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:44.076554 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:44.076652 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:44.077073 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:44.453574 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:44.512034 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:44.512090 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:44.512116 1614600 retry.go:31] will retry after 707.625417ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:44.577247 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:44.577348 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:44.577656 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:44.734008 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:44.795873 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:44.795936 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:44.795960 1614600 retry.go:31] will retry after 1.127913267s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:45.077396 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:45.077480 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:45.077910 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:35:45.077993 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
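
Interleaved with the applies, node_ready.go polls GET /api/v1/nodes/functional-331811 roughly every 500ms for up to 6m, and periodically surfaces a "will retry" warning like the one above when the dial is refused. A minimal sketch of that readiness poll, assuming client-go (the real code routes requests through minikube's own logging round-tripper, which produces the Request/Response lines here):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Poll every 500ms, give up after 6m, matching the budget in the log.
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, "functional-331811", metav1.GetOptions{})
                if err != nil {
                    // Connection refused etc.: warn and keep polling, as minikube does.
                    fmt.Println("will retry:", err)
                    return false, nil
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        if err != nil {
            panic(err)
        }
    }
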
	I1209 04:35:45.220540 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:45.296909 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:45.296951 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:45.296996 1614600 retry.go:31] will retry after 917.152391ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:45.577366 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:45.577441 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:45.577737 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:45.924157 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:45.995176 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:45.995217 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:45.995239 1614600 retry.go:31] will retry after 1.420775217s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:46.077446 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:46.077526 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:46.077798 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:46.215234 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:46.279745 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:46.279823 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:46.279850 1614600 retry.go:31] will retry after 1.336322791s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:46.577242 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:46.577341 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:46.577688 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:47.077361 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:47.077438 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:47.077723 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:47.416255 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:47.477013 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:47.480365 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:47.480397 1614600 retry.go:31] will retry after 2.174557655s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:47.576489 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:47.576616 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:47.576955 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:35:47.577044 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:35:47.617100 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:47.681529 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:47.681577 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:47.681598 1614600 retry.go:31] will retry after 3.276200411s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:48.077115 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:48.077203 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:48.077555 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:48.577382 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:48.577481 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:48.577821 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:49.076458 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:49.076528 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:49.076798 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:49.576545 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:49.576626 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:49.576988 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:49.655381 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:49.715000 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:49.715035 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:49.715054 1614600 retry.go:31] will retry after 3.337758974s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:50.077421 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:50.077518 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:50.077847 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:35:50.077903 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:35:50.576531 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:50.576630 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:50.576967 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:50.958720 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:51.022646 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:51.022681 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:51.022700 1614600 retry.go:31] will retry after 4.624703928s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:51.077048 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:51.077142 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:51.077474 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:51.577259 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:51.577334 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:51.577661 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:52.076578 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:52.076656 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:52.076943 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:52.576488 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:52.576565 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:52.576896 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:35:52.576958 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:35:53.053753 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:53.077246 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:53.077324 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:53.077594 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:53.113242 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:53.113284 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:53.113306 1614600 retry.go:31] will retry after 2.734988542s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:53.576425 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:53.576526 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:53.576833 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:54.076533 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:54.076634 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:54.076949 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:54.576551 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:54.576653 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:54.577004 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:35:54.577071 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:35:55.076426 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:55.076500 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:55.076811 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:55.576518 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:55.576596 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:55.576936 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:55.648391 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:55.705094 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:55.708789 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:55.708820 1614600 retry.go:31] will retry after 6.736330921s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:55.849034 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:55.918734 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:55.918780 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:55.918800 1614600 retry.go:31] will retry after 8.152075725s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:56.077153 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:56.077246 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:56.077636 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:56.577352 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:56.577427 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:56.577693 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:35:56.577743 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:35:57.077398 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:57.077499 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:57.077829 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:57.576552 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:57.576635 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:57.576959 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:58.076583 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:58.076666 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:58.076931 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:58.576498 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:58.576587 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:58.576893 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:59.076592 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:59.076667 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:59.077034 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:35:59.077089 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:35:59.576459 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:59.576533 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:59.576805 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:00.076586 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:00.076681 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:00.077014 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:00.576522 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:00.576616 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:00.577002 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:01.076587 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:01.076666 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:01.076947 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:01.576525 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:01.576599 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:01.576933 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:01.576991 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:02.077159 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:02.077237 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:02.077605 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:02.446164 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:36:02.502744 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:36:02.506462 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:02.506498 1614600 retry.go:31] will retry after 8.388840508s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:02.576683 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:02.576758 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:02.577095 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:03.076524 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:03.076604 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:03.076977 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:03.576704 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:03.576784 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:03.577119 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:03.577179 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
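
Note the two distinct refused endpoints in these lines: kubectl running inside the node dials localhost:8441, while the test harness dials the node IP 192.168.49.2:8441. Both refusing indicates kube-apiserver itself is not listening, so the --validate=false workaround the kubectl error suggests would not rescue the apply; it would skip schema validation only to fail at submission. A hypothetical probe (not part of minikube) that makes the distinction explicit:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // probe reports whether anything accepts TCP connections at addr.
    func probe(addr string) string {
        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
        if err != nil {
            return fmt.Sprintf("%s: %v", addr, err)
        }
        conn.Close()
        return fmt.Sprintf("%s: reachable", addr)
    }

    func main() {
        // In-node path used by kubectl, and the external path used by the harness.
        fmt.Println(probe("127.0.0.1:8441"))
        fmt.Println(probe("192.168.49.2:8441"))
    }
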
	I1209 04:36:04.071900 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:36:04.076533 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:04.076606 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:04.076869 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:04.150537 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:36:04.154620 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:04.154650 1614600 retry.go:31] will retry after 8.078270125s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:04.577310 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:04.577452 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:04.577816 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:05.076556 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:05.076634 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:05.077025 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:05.576594 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:05.576672 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:05.576950 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:06.076647 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:06.076738 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:06.077077 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:06.077129 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:06.576522 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:06.576621 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:06.576938 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:07.076823 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:07.076900 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:07.077209 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:07.577024 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:07.577097 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:07.577441 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:08.077262 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:08.077341 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:08.077670 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:08.077723 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:08.577265 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:08.577344 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:08.577616 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:09.077403 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:09.077482 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:09.077835 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:09.576413 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:09.576503 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:09.576813 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:10.076504 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:10.076593 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:10.076887 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:10.576575 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:10.576673 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:10.576991 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:10.577053 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:10.895548 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:36:10.953462 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:36:10.957148 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:10.957180 1614600 retry.go:31] will retry after 18.757746695s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
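
The retry.go:31 lines above rerun the failed apply after a jittered delay (18.76s for this attempt; later attempts in this log wait roughly 13-28s). A minimal sketch of that apply-with-backoff pattern; applyWithRetry and the 10-30s jitter window are assumptions chosen to resemble the logged intervals, not minikube's actual backoff policy:

package addons

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry shells out the same way the ssh_runner lines show
// (sudo accepts the leading KUBECONFIG=... assignment before the command)
// and backs off with jitter between failed attempts.
func applyWithRetry(kubectl, manifest string, attempts int) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			kubectl, "apply", "--force", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply %s: %v\noutput:\n%s", manifest, err, out)
		delay := 10*time.Second + time.Duration(rand.Int63n(int64(20*time.Second)))
		fmt.Printf("apply failed, will retry after %s: %v\n", delay, lastErr)
		time.Sleep(delay)
	}
	return lastErr
}
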
	I1209 04:36:11.076395 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:11.076478 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:11.076772 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:11.576443 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:11.576513 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:11.576815 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:12.076936 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:12.077013 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:12.077309 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:12.233682 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:36:12.292817 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:36:12.296392 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:12.296423 1614600 retry.go:31] will retry after 20.023788924s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
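
Note that these validation failures happen before any object is sent: kubectl apply first downloads the OpenAPI schema (GET /openapi/v2) for client-side validation, and with the apiserver down that fetch is what fails; --validate=false would only skip the schema check, since the apply itself still needs the API. A hedged sketch of probing the apiserver's /readyz endpoint (served to unauthenticated clients under default RBAC) before bothering to retry; the endpoint choice and timeout are assumptions:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// apiserverReady reports whether the apiserver answers /readyz with 200.
func apiserverReady(base string) bool {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The probe only checks liveness of a local test VM, so skipping
		// certificate verification is acceptable here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(base + "/readyz")
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	fmt.Println(apiserverReady("https://192.168.49.2:8441"))
}
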
	I1209 04:36:12.576943 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:12.577019 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:12.577364 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:12.577421 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:13.077108 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:13.077239 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:13.077603 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:13.577256 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:13.577343 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:13.577689 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:14.077313 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:14.077412 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:14.077731 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:14.576427 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:14.576496 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:14.576774 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:15.076490 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:15.076583 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:15.076938 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:15.076994 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:15.576474 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:15.576555 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:15.576853 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:16.076431 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:16.076506 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:16.076783 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:16.576527 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:16.576609 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:16.576956 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:17.076988 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:17.077082 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:17.077457 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:17.077514 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:17.577068 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:17.577144 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:17.577409 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:18.077285 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:18.077383 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:18.077755 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:18.576466 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:18.576544 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:18.576909 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:19.076597 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:19.076666 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:19.076929 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
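
The Accept header repeated in every request above shows the client negotiating protobuf on the wire with a JSON fallback. With client-go this is two fields on rest.Config; a minimal sketch, with the kubeconfig path as a placeholder:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	// Prefer protobuf, fall back to JSON: this produces exactly the
	// Accept header seen in the round_trippers lines.
	cfg.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
	cfg.ContentType = "application/vnd.kubernetes.protobuf"
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(cs != nil)
}
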
	I1209 04:36:19.576602 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:19.576675 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:19.577011 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:19.577070 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:20.076579 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:20.076658 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:20.076980 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:20.576450 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:20.576531 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:20.576849 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:21.076506 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:21.076594 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:21.076946 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:21.576536 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:21.576638 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:21.576994 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:22.077314 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:22.077388 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:22.077670 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:22.077714 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:22.576513 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:22.576607 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:22.576958 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:23.076502 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:23.076595 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:23.076934 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:23.576637 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:23.576705 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:23.577060 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:24.076759 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:24.076839 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:24.077254 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:24.576837 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:24.576916 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:24.577306 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:24.577364 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:25.077118 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:25.077190 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:25.077463 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:25.577272 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:25.577348 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:25.577737 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:26.077403 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:26.077487 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:26.077842 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:26.576440 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:26.576511 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:26.576779 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:27.076863 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:27.076944 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:27.077310 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:27.077367 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:27.577163 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:27.577241 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:27.577580 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:28.077311 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:28.077379 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:28.077629 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:28.577399 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:28.577473 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:28.577808 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:29.076424 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:29.076514 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:29.076878 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:29.576577 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:29.576646 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:29.576910 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:29.576955 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:29.715418 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:36:29.773517 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:36:29.777518 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:29.777549 1614600 retry.go:31] will retry after 13.466249075s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
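
The ssh_runner.go:195 lines run kubectl inside the node over SSH rather than on the host. A minimal sketch of that command-runner pattern with golang.org/x/crypto/ssh; the address, user, and key path are placeholders, not minikube's configuration:

package sshrun

import (
	"bytes"
	"os"

	"golang.org/x/crypto/ssh"
)

// Run executes cmd on the remote host and returns its combined output.
func Run(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	var out bytes.Buffer
	sess.Stdout = &out
	sess.Stderr = &out
	err = sess.Run(cmd)
	return out.String(), err
}
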
	I1209 04:36:30.077059 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:30.077150 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:30.077512 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:30.577014 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:30.577100 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:30.577433 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:31.077181 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:31.077268 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:31.077521 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:31.577348 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:31.577443 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:31.577801 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:31.577857 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:32.076722 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:32.076806 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:32.077154 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:32.320502 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:36:32.377593 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:36:32.381870 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:32.381909 1614600 retry.go:31] will retry after 28.435049856s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:32.577214 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:32.577283 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:32.577547 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:33.077429 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:33.077516 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:33.077823 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:33.576506 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:33.576632 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:33.576978 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:34.076485 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:34.076586 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:34.076922 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:34.076973 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:34.576560 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:34.576639 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:34.576951 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:35.076511 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:35.076628 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:35.076979 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:35.576473 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:35.576575 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:35.576844 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:36.076491 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:36.076571 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:36.076926 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:36.576535 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:36.576620 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:36.576977 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:36.577035 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:37.076803 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:37.076875 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:37.077215 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:37.577050 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:37.577125 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:37.577459 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:38.077398 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:38.077495 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:38.077876 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:38.576584 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:38.576668 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:38.576989 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:39.076692 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:39.076768 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:39.077121 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:39.077180 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:39.576496 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:39.576575 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:39.576911 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:40.076578 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:40.076653 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:40.077016 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:40.576532 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:40.576612 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:40.576898 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:41.076617 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:41.076698 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:41.077052 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:41.576584 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:41.576671 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:41.576937 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:41.576987 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:42.076459 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:42.076556 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:42.076942 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:42.576531 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:42.576610 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:42.576958 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:43.076568 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:43.076663 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:43.077002 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:43.244488 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:36:43.308556 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:36:43.308599 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:43.308622 1614600 retry.go:31] will retry after 20.568808948s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:43.577020 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:43.577099 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:43.577399 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:43.577456 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:44.077183 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:44.077280 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:44.077609 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:44.577311 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:44.577390 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:44.577747 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:45.076609 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:45.076692 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:45.077821 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1209 04:36:45.576471 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:45.576555 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:45.576880 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:46.076459 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:46.076531 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:46.076837 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:46.076889 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:46.576488 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:46.576565 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:46.576859 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:47.076876 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:47.076949 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:47.077253 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:47.577001 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:47.577079 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:47.577339 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:48.077087 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:48.077173 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:48.077495 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:48.077544 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:48.577135 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:48.577218 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:48.577531 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:49.077177 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:49.077246 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:49.077507 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:49.577363 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:49.577442 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:49.577806 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:50.076499 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:50.076584 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:50.076933 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:50.576621 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:50.576693 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:50.577013 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:50.577067 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:51.076722 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:51.076799 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:51.077123 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:51.576506 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:51.576581 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:51.576933 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:52.076970 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:52.077045 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:52.077314 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:52.577685 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET poll against /api/v1/nodes/functional-331811 repeats every ~500ms through 04:37:00.577, and the node_ready.go:55 "connection refused" warning recurs roughly every 2s ...]
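
The run above is minikube's node-readiness loop: the same GET against /api/v1/nodes/functional-331811 every ~500ms, absorbing "connection refused" until the apiserver comes back. A minimal client-go sketch of that shape follows; the kubeconfig path, node name, interval, and timeout are illustrative assumptions, not values taken from minikube's source.

// nodeready.go: poll a node's Ready condition until it is True or the
// context expires, retrying through transient errors the way the
// node_ready.go loop in the log does.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	tick := time.NewTicker(500 * time.Millisecond) // matches the ~500ms cadence in the log
	defer tick.Stop()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			// Transient errors (e.g. "connection refused" while the
			// apiserver restarts) are logged and retried, not fatal.
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-tick.C:
		}
	}
}

func main() {
	// Hypothetical kubeconfig path and node name, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "functional-331811"); err != nil {
		fmt.Println("node never became Ready:", err)
	}
}

The design point visible in the log is that dial errors are logged and retried rather than aborting the wait, which is why the loop keeps running while the apiserver stays down.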
	I1209 04:37:00.817971 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:37:00.880147 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:37:00.880206 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:37:00.880224 1614600 retry.go:31] will retry after 16.46927575s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
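
The retry.go:31 line above schedules the next apply attempt after a computed backoff (16.46927575s here). Below is a rough sketch of that apply-and-retry shape, shelling out to kubectl the way the log's command lines do; the manifest path comes from the log, but the attempt cap and backoff schedule are illustrative assumptions, not minikube's actual retry policy.

// applyretry.go: run "kubectl apply" and retry with growing, jittered
// delays while the apiserver is still coming up.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

func applyWithRetry(manifest string, attempts int) error {
	delay := 2 * time.Second
	var err error
	for i := 0; i < attempts; i++ {
		out, e := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if e == nil {
			return nil
		}
		err = fmt.Errorf("apply %s: %v\n%s", manifest, e, out)
		// Jittered exponential backoff, loosely like the
		// "will retry after 16.46927575s" line in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("apply failed, will retry after %s: %v\n", sleep, e)
		time.Sleep(sleep)
		delay *= 2
	}
	return err
}

func main() {
	// Manifest path taken from the log; attempt cap is an assumption.
	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
		fmt.Println("giving up:", err)
	}
}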
	[... identical node polls continue every ~500ms from 04:37:01.076 through 04:37:03.577, each refused (node_ready.go:55 warning at 04:37:02.077) ...]
	I1209 04:37:03.878560 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:37:03.937026 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:37:03.940694 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:37:03.940802 1614600 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
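
Every failure in this stretch shares one root cause: nothing is accepting connections on port 8441, so kubectl cannot download the OpenAPI schema it validates against. A quick way to confirm that independently is a raw TCP dial against both endpoints seen in the log; this is a hedged sketch, and the 2-second timeout is an arbitrary choice.

// probe.go: check whether the apiserver's TCP port accepts connections
// at all; "connection refused" here explains every validation failure
// in the surrounding log.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	for _, addr := range []string{"127.0.0.1:8441", "192.168.49.2:8441"} {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Printf("%s: unreachable: %v\n", addr, err)
			continue
		}
		conn.Close()
		fmt.Printf("%s: accepting connections\n", addr)
	}
}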
	[... identical node polls continue every ~500ms from 04:37:04.077 through 04:37:17.077, with node_ready.go:55 "connection refused" warnings roughly every 2s ...]
	I1209 04:37:17.349771 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:37:17.409388 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:37:17.413192 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:37:17.413302 1614600 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1209 04:37:17.416242 1614600 out.go:179] * Enabled addons: 
	I1209 04:37:17.419770 1614600 addons.go:530] duration metric: took 1m35.33224358s for enable addons: enabled=[]
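
The addons.go:530 line closes the phase with a wall-clock duration and an empty enabled=[] list, since every callback failed while the apiserver was down. The sketch below shows that bookkeeping pattern with hypothetical callback names and structure; it is not minikube's actual addons code.

// addonsmetric.go: time an "enable addons" phase and report which
// addons actually succeeded, mirroring the enabled=[] summary line.
package main

import (
	"fmt"
	"time"
)

func main() {
	start := time.Now()
	addons := map[string]func() error{
		// Hypothetical callbacks; both fail while the apiserver is down.
		"storage-provisioner":  func() error { return fmt.Errorf("connection refused") },
		"default-storageclass": func() error { return fmt.Errorf("connection refused") },
	}
	var enabled []string
	for name, enable := range addons {
		if err := enable(); err != nil {
			fmt.Printf("! Enabling '%s' returned an error: %v\n", name, err)
			continue
		}
		enabled = append(enabled, name)
	}
	fmt.Printf("duration metric: took %s for enable addons: enabled=%v\n",
		time.Since(start), enabled)
}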
	I1209 04:37:17.576427 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:17.576504 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:17.576800 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:18.076477 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:18.076562 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:18.076914 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:18.076974 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:18.576508 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:18.576586 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:18.576933 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:19.076609 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:19.076683 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:19.077016 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:19.576492 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:19.576586 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:19.576903 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:20.076626 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:20.076704 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:20.077078 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:20.077138 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:20.576447 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:20.576514 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:20.576867 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:21.076557 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:21.076645 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:21.076996 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:21.576492 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:21.576568 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:21.576907 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:22.076971 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:22.077046 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:22.077320 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:22.077371 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:22.577119 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:22.577200 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:22.577508 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:23.077228 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:23.077302 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:23.077678 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:23.577301 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:23.577385 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:23.577646 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:24.077387 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:24.077467 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:24.077801 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:24.077859 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:24.576410 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:24.576486 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:24.576813 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:25.076445 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:25.076516 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:25.076845 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:25.576541 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:25.576634 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:25.576928 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:26.076617 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:26.076695 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:26.077076 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:26.576434 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:26.576510 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:26.576842 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:26.576894 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:27.077363 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:27.077438 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:27.077772 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:27.576489 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:27.576571 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:27.576899 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:28.076461 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:28.076533 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:28.076819 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:28.576482 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:28.576561 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:28.576853 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:29.076585 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:29.076670 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:29.077006 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:29.077067 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:29.576518 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:29.576604 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:29.576904 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:30.076534 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:30.076619 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:30.077013 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:30.576516 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:30.576599 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:30.576943 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:31.076627 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:31.076712 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:31.077034 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:31.576748 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:31.576823 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:31.577148 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:31.577206 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:32.077358 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:32.077437 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:32.077778 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:32.576461 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:32.576535 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:32.576870 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:33.076486 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:33.076565 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:33.076904 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:33.576613 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:33.576689 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:33.577020 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:34.076719 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:34.076790 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:34.077129 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:34.077191 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:34.576481 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:34.576554 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:34.576909 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:35.076619 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:35.076695 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:35.077045 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:35.576555 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:35.576651 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:35.576958 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:36.076520 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:36.076606 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:36.076943 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:36.576480 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:36.576557 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:36.576849 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:36.576893 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:37.076700 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:37.076768 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:37.077025 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:37.576452 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:37.576527 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:37.576844 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:38.076505 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:38.076581 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:38.076931 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:38.576488 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:38.576566 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:38.576841 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:39.076477 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:39.076559 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:39.076894 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:39.076952 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:39.576497 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:39.576582 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:39.576911 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:40.076451 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:40.076525 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:40.076830 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:40.576466 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:40.576543 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:40.576873 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:41.076505 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:41.076581 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:41.076918 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:41.076977 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:41.576436 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:41.576507 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:41.576804 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:42.076573 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:42.076649 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:42.077059 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:42.576779 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:42.576872 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:42.577233 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:43.077479 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:43.077558 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:43.077870 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:43.077918 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:43.576487 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:43.576579 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:43.576959 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... roughly 120 near-identical poll cycles elided: the same GET https://192.168.49.2:8441/api/v1/nodes/functional-331811 request repeats every ~500 ms from 04:37:44 through 04:38:43, each attempt returning no response, with a node_ready.go:55 "connection refused" retry warning logged roughly every 2 s ...]
	I1209 04:38:44.077436 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:44.077527 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:44.078002 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:44.576444 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:44.576521 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:44.576829 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:45.076652 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:45.076741 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:45.077429 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:45.577050 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:45.577123 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:45.577460 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:46.077241 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:46.077341 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:46.077667 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:46.077724 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:46.576427 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:46.576518 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:46.576860 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:47.076726 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:47.076801 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:47.077144 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:47.576574 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:47.576648 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:47.576923 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:48.076626 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:48.076715 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:48.077126 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:48.576849 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:48.576930 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:48.577268 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:48.577334 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:49.077051 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:49.077122 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:49.077394 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:49.577191 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:49.577270 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:49.577582 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:50.077370 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:50.077454 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:50.077810 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:50.576424 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:50.576502 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:50.576796 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:51.076506 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:51.076583 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:51.076910 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:51.076969 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:51.576623 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:51.576749 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:51.577040 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:52.077085 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:52.077160 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:52.077422 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:52.577216 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:52.577295 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:52.577613 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:53.077392 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:53.077475 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:53.077797 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:53.077856 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:53.576362 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:53.576448 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:53.576718 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:54.076489 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:54.076568 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:54.076906 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:54.576614 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:54.576695 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:54.577055 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:55.076745 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:55.076818 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:55.077132 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:55.576528 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:55.576605 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:55.576901 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:55.576949 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:56.076653 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:56.076741 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:56.077039 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:56.576380 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:56.576457 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:56.576717 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:57.076676 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:57.076750 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:57.077090 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:57.576453 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:57.576546 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:57.576855 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:58.076528 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:58.076633 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:58.076936 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:58.076991 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:58.576513 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:58.576586 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:58.576869 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:59.076607 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:59.076681 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:59.077015 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:59.576391 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:59.576459 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:59.576721 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:00.076467 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:00.076562 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:00.076886 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:00.576524 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:00.576622 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:00.576958 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:00.577017 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:01.076583 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:01.076670 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:01.077008 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:01.576525 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:01.576603 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:01.576887 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:02.077021 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:02.077100 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:02.077451 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:02.577124 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:02.577217 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:02.577512 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:02.577562 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:03.077323 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:03.077407 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:03.077775 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:03.576388 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:03.576462 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:03.576801 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:04.076514 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:04.076589 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:04.076927 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:04.576506 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:04.576586 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:04.576948 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:05.076534 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:05.076614 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:05.076965 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:05.077020 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:05.576441 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:05.576512 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:05.576828 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:06.076541 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:06.076627 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:06.076963 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:06.576692 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:06.576772 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:06.577111 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:07.076853 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:07.076924 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:07.077177 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:07.077219 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:07.576482 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:07.576580 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:07.576924 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:08.076518 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:08.076598 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:08.076971 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:08.576536 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:08.576605 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:08.576907 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:09.076495 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:09.076571 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:09.076930 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:09.576669 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:09.576753 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:09.577117 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:09.577174 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:10.076441 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:10.076525 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:10.076856 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:10.576508 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:10.576584 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:10.576962 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:11.076574 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:11.076664 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:11.077066 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:11.576620 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:11.576687 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:11.576941 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:12.077176 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:12.077252 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:12.077629 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:12.077711 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:12.576425 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:12.576516 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:12.576897 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:13.076570 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:13.076642 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:13.076950 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:13.576510 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:13.576587 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:13.576938 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:14.076477 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:14.076552 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:14.076894 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:14.576443 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:14.576522 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:14.576831 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:14.576881 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:15.076545 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:15.076624 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:15.076935 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:15.576475 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:15.576552 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:15.576870 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:16.076458 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:16.076538 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:16.076835 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:16.576450 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:16.576533 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:16.576890 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:16.576949 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:17.076773 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:17.076853 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:17.077193 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:17.576588 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:17.576661 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:17.576992 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:18.076473 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:18.076552 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:18.076899 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:18.576718 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:18.576802 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:18.577123 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:18.577182 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:19.076436 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:19.076509 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:19.076822 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:19.576524 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:19.576621 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:19.576983 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:20.076486 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:20.076564 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:20.076929 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:20.576479 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:20.576557 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:20.576928 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:21.076622 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:21.076716 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:21.077074 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:21.077128 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:21.576821 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:21.576903 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:21.577234 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:22.077298 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:22.077380 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:22.077644 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:22.576377 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:22.576459 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:22.576821 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:23.076525 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:23.076606 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:23.076901 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:23.576410 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:23.576486 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:23.576738 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:23.576788 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:24.076805 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:24.076886 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:24.077219 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:24.577078 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:24.577155 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:24.577448 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:25.077345 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:25.077571 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:25.078098 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:25.576509 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:25.576598 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:25.576942 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:25.576994 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:26.076519 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:26.076622 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:26.076931 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:26.576506 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:26.576571 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:26.576844 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:27.076774 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:27.076849 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:27.077183 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:27.577039 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:27.577116 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:27.577462 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:27.577520 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:28.077111 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:28.077189 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:28.077451 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:28.577185 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:28.577261 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:28.577578 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:29.077440 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:29.077520 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:29.077849 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:29.576465 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:29.576538 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:29.576812 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:30.076538 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:30.076629 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:30.076998 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:30.077061 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:30.576517 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:30.576595 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:30.576923 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:31.076574 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:31.076653 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:31.076955 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:31.576521 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:31.576595 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:31.576916 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:32.076880 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:32.076954 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:32.077270 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:32.077326 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:32.577067 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:32.577140 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:32.577413 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:33.077278 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:33.077360 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:33.077744 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:33.576501 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:33.576578 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:33.576898 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:34.076472 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:34.076561 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:34.076906 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:34.577065 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-331811 poll repeats every ~500 ms from 04:39:34 through 04:40:36; every attempt returns status="" headers="" milliseconds=0, and node_ready.go:55 logs the same "will retry" connection-refused warning roughly every 2 s throughout ...]
	I1209 04:40:36.076516 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:36.076591 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:36.076925 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:36.576496 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:36.576571 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:36.576862 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:37.076779 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:37.076855 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:37.077112 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:37.576479 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:37.576556 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:37.576867 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:37.576915 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:38.076487 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:38.076570 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:38.077013 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:38.576449 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:38.576523 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:38.576839 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:39.076527 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:39.076608 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:39.076938 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:39.576653 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:39.576731 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:39.577063 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:39.577116 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:40.076444 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:40.076518 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:40.076828 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:40.576473 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:40.576552 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:40.576874 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:41.076569 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:41.076652 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:41.077011 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:41.576534 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:41.576619 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:41.576925 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:42.077395 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:42.077483 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:42.077909 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:42.078001 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:42.576664 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:42.576741 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:42.577081 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:43.076642 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:43.076713 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:43.077006 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:43.576492 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:43.576571 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:43.576907 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:44.076499 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:44.076576 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:44.076879 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:44.576522 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:44.576597 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:44.576903 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:44.576957 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:45.076519 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:45.076615 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:45.077092 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:45.576710 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:45.576785 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:45.577104 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:46.076467 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:46.076542 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:46.076809 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:46.576463 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:46.576544 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:46.576867 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:47.076788 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:47.076864 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:47.077245 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:47.077300 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:47.576416 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:47.576497 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:47.576797 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:48.076521 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:48.076612 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:48.076992 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:48.576737 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:48.576822 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:48.577164 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:49.076459 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:49.076532 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:49.076827 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:49.576503 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:49.576585 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:49.576979 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:49.577037 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:50.076712 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:50.076793 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:50.077113 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:50.576457 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:50.576530 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:50.576900 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:51.076607 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:51.076686 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:51.077038 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:51.576760 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:51.576835 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:51.577164 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:51.577220 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:52.077315 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:52.077402 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:52.077698 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:52.576467 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:52.576558 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:52.576921 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:53.076656 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:53.076733 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:53.077077 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:53.576420 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:53.576495 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:53.576776 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:54.076522 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:54.076601 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:54.076946 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:54.077005 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:54.576713 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:54.576789 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:54.577077 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:55.076751 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:55.076830 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:55.077119 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:55.576516 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:55.576590 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:55.576893 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:56.076633 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:56.076712 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:56.077010 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:56.077057 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:56.576546 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:56.576617 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:56.576885 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:57.076904 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:57.076984 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:57.077287 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:57.577072 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:57.577156 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:57.577468 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:58.077202 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:58.077274 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:58.077543 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:58.077586 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:58.577422 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:58.577500 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:58.577833 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:59.076518 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:59.076598 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:59.076973 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:59.576658 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:59.576742 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:59.577051 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:00.076592 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:00.076674 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:00.077010 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:00.576861 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:00.576941 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:00.577298 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:00.577372 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:01.077099 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:01.077168 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:01.077505 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:01.577309 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:01.577392 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:01.577699 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:02.076372 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:02.076451 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:02.076749 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:02.576406 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:02.576484 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:02.576852 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:03.076591 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:03.076792 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:03.077195 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:03.077250 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:03.576825 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:03.576906 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:03.577274 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:04.076812 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:04.076893 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:04.077226 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:04.577138 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:04.577214 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:04.577536 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:05.077263 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:05.077343 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:05.077665 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:05.077723 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:05.576380 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:05.576451 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:05.576771 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:06.076472 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:06.076554 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:06.076889 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:06.576483 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:06.576557 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:06.576878 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:07.076816 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:07.076891 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:07.077173 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:07.576468 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:07.576545 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:07.576865 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:07.576918 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:08.076523 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:08.076616 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:08.077003 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:08.576544 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:08.576620 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:08.576943 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:09.076478 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:09.076560 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:09.076893 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:09.576500 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:09.576574 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:09.576908 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:09.576964 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:10.076483 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:10.076557 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:10.076873 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:10.576497 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:10.576579 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:10.576942 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:11.076653 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:11.076738 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:11.077082 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:11.576454 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:11.576527 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:11.576850 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:12.077093 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:12.077172 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:12.077480 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:12.077535 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:12.577297 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:12.577374 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:12.577704 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:13.076405 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:13.076480 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:13.076737 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:13.576468 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:13.576545 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:13.576887 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:14.076611 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:14.076691 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:14.077032 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:14.576620 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:14.576693 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:14.576955 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:14.576999 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:15.076684 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:15.076776 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:15.077081 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:15.576779 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:15.576853 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:15.577200 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:16.076568 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:16.076639 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:16.076920 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:16.576637 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:16.576710 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:16.577052 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:16.577105 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:17.076817 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:17.076891 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:17.077226 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:17.576383 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:17.576453 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:17.576788 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:18.076519 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:18.076603 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:18.076964 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:18.576667 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:18.576744 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:18.577069 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:18.577127 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:19.076439 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:19.076510 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:19.076761 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:19.576436 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:19.576511 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:19.576847 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:20.076523 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:20.076612 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:20.077004 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:20.576560 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:20.576633 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:20.576959 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:21.076661 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:21.076737 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:21.077147 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:21.077209 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:21.576890 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:21.576967 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:21.577291 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:22.077043 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:22.077129 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:22.077436 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:22.577198 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:22.577279 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:22.577606 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:23.076378 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:23.076452 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:23.076785 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:23.576403 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:23.576491 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:23.576812 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:23.576864 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:24.076524 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:24.076598 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:24.076950 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:24.576479 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:24.576557 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:24.576922 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:25.076617 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:25.076698 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:25.076975 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:25.576425 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:25.576506 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:25.576863 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:25.576919 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same polling cycle repeats every ~500ms from 04:41:26.076 through 04:41:42.577: an empty "Request Body", a GET to https://192.168.49.2:8441/api/v1/nodes/functional-331811, an empty response, and every few attempts a node_ready.go warning ending in "connect: connection refused"; the duplicate cycles are elided here ...]
	I1209 04:41:43.076660 1614600 node_ready.go:38] duration metric: took 6m0.000391304s for node "functional-331811" to be "Ready" ...
	I1209 04:41:43.080060 1614600 out.go:203] 
	W1209 04:41:43.083006 1614600 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1209 04:41:43.083030 1614600 out.go:285] * 
	W1209 04:41:43.085173 1614600 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 04:41:43.088614 1614600 out.go:203] 
	
	
	==> CRI-O <==
	Dec 09 04:41:52 functional-331811 crio[5392]: time="2025-12-09T04:41:52.495480805Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=1b07b985-4d88-4211-a868-754c2842560d name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:53 functional-331811 crio[5392]: time="2025-12-09T04:41:53.571076374Z" level=info msg="Checking image status: minikube-local-cache-test:functional-331811" id=7673cc8c-89c1-4a84-8a18-b3b039a63ff9 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:53 functional-331811 crio[5392]: time="2025-12-09T04:41:53.571286165Z" level=info msg="Resolving \"minikube-local-cache-test\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 09 04:41:53 functional-331811 crio[5392]: time="2025-12-09T04:41:53.571344077Z" level=info msg="Image minikube-local-cache-test:functional-331811 not found" id=7673cc8c-89c1-4a84-8a18-b3b039a63ff9 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:53 functional-331811 crio[5392]: time="2025-12-09T04:41:53.571451492Z" level=info msg="Neither image nor artfiact minikube-local-cache-test:functional-331811 found" id=7673cc8c-89c1-4a84-8a18-b3b039a63ff9 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:53 functional-331811 crio[5392]: time="2025-12-09T04:41:53.596017684Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-331811" id=5fc4b7b0-d579-4f99-9fe9-2cdeab8984cb name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:53 functional-331811 crio[5392]: time="2025-12-09T04:41:53.596202818Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-331811 not found" id=5fc4b7b0-d579-4f99-9fe9-2cdeab8984cb name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:53 functional-331811 crio[5392]: time="2025-12-09T04:41:53.596262707Z" level=info msg="Neither image nor artfiact docker.io/library/minikube-local-cache-test:functional-331811 found" id=5fc4b7b0-d579-4f99-9fe9-2cdeab8984cb name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:53 functional-331811 crio[5392]: time="2025-12-09T04:41:53.62409553Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-331811" id=cdf26a85-05d7-478c-921b-c7cfd547e778 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:53 functional-331811 crio[5392]: time="2025-12-09T04:41:53.624254367Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-331811 not found" id=cdf26a85-05d7-478c-921b-c7cfd547e778 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:53 functional-331811 crio[5392]: time="2025-12-09T04:41:53.624315922Z" level=info msg="Neither image nor artfiact localhost/library/minikube-local-cache-test:functional-331811 found" id=cdf26a85-05d7-478c-921b-c7cfd547e778 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:54 functional-331811 crio[5392]: time="2025-12-09T04:41:54.60951842Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=f4b4abbe-d973-456f-90ca-4da5f8983018 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:54 functional-331811 crio[5392]: time="2025-12-09T04:41:54.93936094Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=427106ed-a66d-43e6-a017-1cd0025c39fe name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:54 functional-331811 crio[5392]: time="2025-12-09T04:41:54.939510652Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=427106ed-a66d-43e6-a017-1cd0025c39fe name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:54 functional-331811 crio[5392]: time="2025-12-09T04:41:54.939551054Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=427106ed-a66d-43e6-a017-1cd0025c39fe name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:55 functional-331811 crio[5392]: time="2025-12-09T04:41:55.635199405Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=9d1fcc30-e692-4f4c-a0f4-5fadf0221e50 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:55 functional-331811 crio[5392]: time="2025-12-09T04:41:55.635350495Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=9d1fcc30-e692-4f4c-a0f4-5fadf0221e50 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:55 functional-331811 crio[5392]: time="2025-12-09T04:41:55.635402557Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=9d1fcc30-e692-4f4c-a0f4-5fadf0221e50 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:55 functional-331811 crio[5392]: time="2025-12-09T04:41:55.660171631Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=d94e6396-8709-4670-b3c7-435cd438defe name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:55 functional-331811 crio[5392]: time="2025-12-09T04:41:55.660323928Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=d94e6396-8709-4670-b3c7-435cd438defe name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:55 functional-331811 crio[5392]: time="2025-12-09T04:41:55.660362107Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=d94e6396-8709-4670-b3c7-435cd438defe name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:55 functional-331811 crio[5392]: time="2025-12-09T04:41:55.684198232Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=eb6e5bea-4040-4e55-bf42-8d86d17b24ec name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:55 functional-331811 crio[5392]: time="2025-12-09T04:41:55.684352704Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=eb6e5bea-4040-4e55-bf42-8d86d17b24ec name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:55 functional-331811 crio[5392]: time="2025-12-09T04:41:55.684396798Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=eb6e5bea-4040-4e55-bf42-8d86d17b24ec name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:56 functional-331811 crio[5392]: time="2025-12-09T04:41:56.235761601Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=02beed57-a79e-4bda-819d-650c188a8e7a name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:41:57.769281    9454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:41:57.769929    9454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:41:57.771649    9454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:41:57.772213    9454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:41:57.773743    9454 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 9 02:15] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 03:35] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 04:15] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 04:17] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:23] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:24] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:41] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 04:41:57 up  9:24,  0 user,  load average: 0.59, 0.38, 0.77
	Linux functional-331811 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 09 04:41:55 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:41:55 functional-331811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1154.
	Dec 09 04:41:55 functional-331811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:41:55 functional-331811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:41:55 functional-331811 kubelet[9321]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:41:55 functional-331811 kubelet[9321]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:41:55 functional-331811 kubelet[9321]: E1209 04:41:55.886311    9321 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:41:55 functional-331811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:41:55 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:41:56 functional-331811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1155.
	Dec 09 04:41:56 functional-331811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:41:56 functional-331811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:41:56 functional-331811 kubelet[9349]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:41:56 functional-331811 kubelet[9349]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:41:56 functional-331811 kubelet[9349]: E1209 04:41:56.647422    9349 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:41:56 functional-331811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:41:56 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:41:57 functional-331811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1156.
	Dec 09 04:41:57 functional-331811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:41:57 functional-331811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:41:57 functional-331811 kubelet[9370]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:41:57 functional-331811 kubelet[9370]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:41:57 functional-331811 kubelet[9370]: E1209 04:41:57.383098    9370 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:41:57 functional-331811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:41:57 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
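
The decisive failure in the logs above is the kubelet loop at the end: the v1.35.0-beta.0 kubelet exits during configuration validation because this host is still on cgroup v1, so the static pods (including kube-apiserver) are never started and every request to 192.168.49.2:8441 is refused. A minimal host-side check, using the filesystem-type probe from the Kubernetes documentation (cgroup2fs indicates v2; tmpfs indicates the legacy v1 hierarchy):

	stat -fc %T /sys/fs/cgroup
	# tmpfs      -> cgroup v1, which this kubelet refuses to run on
	# cgroup2fs  -> cgroup v2

Recent kubelets gate this refusal behind the KubeletConfiguration field failCgroupV1; whether minikube offers a supported way to relax it for v1.35.0-beta.0 is not shown in this report, so treat moving the node to a cgroup v2 host as the dependable fix.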
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-331811 -n functional-331811
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-331811 -n functional-331811: exit status 2 (423.508099ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-331811" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.49s)
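
For reference, the --format argument used in the status calls above is a Go template rendered over minikube's status struct, so several components can be inspected in one call. A sketch using the standard status fields (Host and APIServer appear in this report; Kubelet and Kubeconfig are the usual companion fields):

	out/minikube-linux-arm64 status -p functional-331811 \
	  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'

Given the outputs captured above, this profile would be expected to report host:Running alongside apiserver:Stopped.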

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.72s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-331811 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-331811 get pods: exit status 1 (102.59065ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-331811 get pods": exit status 1
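
A refused connection to 192.168.49.2:8441 only proves that nothing is listening; it does not say why. A short sketch for narrowing that down from the host (profile name taken from this report; ss and curl are assumed to be present):

	# is anything bound to the apiserver port inside the node?
	out/minikube-linux-arm64 -p functional-331811 ssh -- sudo ss -ltnp | grep 8441
	# if a listener does exist, probe apiserver liveness directly:
	curl -k https://192.168.49.2:8441/livez

In this run the first command should come back empty, which points at the kubelet crash loop shown earlier rather than at a running but unhealthy apiserver.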
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-331811
helpers_test.go:243: (dbg) docker inspect functional-331811:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87",
	        "Created": "2025-12-09T04:27:19.770188806Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1609115,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T04:27:19.828715728Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/hostname",
	        "HostsPath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/hosts",
	        "LogPath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87-json.log",
	        "Name": "/functional-331811",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-331811:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-331811",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87",
	                "LowerDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685-init/diff:/var/lib/docker/overlay2/cb3f2b8eaaa8875b2899fccd39c4eec1759909855a0b804bc10246bdeabb16ed/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-331811",
	                "Source": "/var/lib/docker/volumes/functional-331811/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-331811",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-331811",
	                "name.minikube.sigs.k8s.io": "functional-331811",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5c0753338127320f08906f0ae98414e1971b55970cf028db179c2214fd2722cb",
	            "SandboxKey": "/var/run/docker/netns/5c0753338127",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34255"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34256"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34259"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34257"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34258"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-331811": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:27:66:bb:a1:d6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8c16962547dedb5d6155d1546bcc27e347ab5261f9ad46fc3b09cc8fb9cc112f",
	                    "EndpointID": "1a5d6a22e9497009b4121ea56dc4839e2ff8827d92252c0464236c5f49c11216",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-331811",
	                        "51da5dad63e9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
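
The NetworkSettings.Ports block in the inspect output above is where the forwarded host ports live (8441/tcp maps to 127.0.0.1:34258 on this container), and the same lookup can be reproduced by hand. A sketch against this profile's container:

	docker port functional-331811 8441
	# 127.0.0.1:34258
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-331811
	# 34258

Either form helps distinguish "port not published" from "published but nothing listening behind it" when a test fails with connection refused.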
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-331811 -n functional-331811
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-331811 -n functional-331811: exit status 2 (327.82313ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-331811 logs -n 25: (1.325747865s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-790468 image ls --format yaml --alsologtostderr                                                                                        │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image   │ functional-790468 image ls --format short --alsologtostderr                                                                                       │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ ssh     │ functional-790468 ssh pgrep buildkitd                                                                                                             │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │                     │
	│ image   │ functional-790468 image ls --format json --alsologtostderr                                                                                        │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image   │ functional-790468 image ls --format table --alsologtostderr                                                                                       │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image   │ functional-790468 image build -t localhost/my-image:functional-790468 testdata/build --alsologtostderr                                            │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image   │ functional-790468 image ls                                                                                                                        │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ delete  │ -p functional-790468                                                                                                                              │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ start   │ -p functional-331811 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │                     │
	│ start   │ -p functional-331811 --alsologtostderr -v=8                                                                                                       │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:35 UTC │                     │
	│ cache   │ functional-331811 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ cache   │ functional-331811 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ cache   │ functional-331811 cache add registry.k8s.io/pause:latest                                                                                          │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ cache   │ functional-331811 cache add minikube-local-cache-test:functional-331811                                                                           │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ cache   │ functional-331811 cache delete minikube-local-cache-test:functional-331811                                                                        │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ ssh     │ functional-331811 ssh sudo crictl images                                                                                                          │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ ssh     │ functional-331811 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ ssh     │ functional-331811 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │                     │
	│ cache   │ functional-331811 cache reload                                                                                                                    │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ ssh     │ functional-331811 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ kubectl │ functional-331811 kubectl -- --context functional-331811 get pods                                                                                 │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 04:35:36
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 04:35:36.923741 1614600 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:35:36.923916 1614600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:35:36.923926 1614600 out.go:374] Setting ErrFile to fd 2...
	I1209 04:35:36.923933 1614600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:35:36.924200 1614600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 04:35:36.924580 1614600 out.go:368] Setting JSON to false
	I1209 04:35:36.925424 1614600 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":33477,"bootTime":1765221460,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1209 04:35:36.925503 1614600 start.go:143] virtualization:  
	I1209 04:35:36.929063 1614600 out.go:179] * [functional-331811] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 04:35:36.932800 1614600 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 04:35:36.932938 1614600 notify.go:221] Checking for updates...
	I1209 04:35:36.938644 1614600 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:35:36.941493 1614600 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:35:36.944366 1614600 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1577059/.minikube
	I1209 04:35:36.947167 1614600 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 04:35:36.949981 1614600 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 04:35:36.953271 1614600 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 04:35:36.953380 1614600 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:35:36.980248 1614600 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:35:36.980355 1614600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:35:37.042703 1614600 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 04:35:37.032815271 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:35:37.042820 1614600 docker.go:319] overlay module found
	I1209 04:35:37.045833 1614600 out.go:179] * Using the docker driver based on existing profile
	I1209 04:35:37.048621 1614600 start.go:309] selected driver: docker
	I1209 04:35:37.048647 1614600 start.go:927] validating driver "docker" against &{Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:35:37.048735 1614600 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 04:35:37.048847 1614600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:35:37.101945 1614600 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 04:35:37.092778249 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:35:37.102371 1614600 cni.go:84] Creating CNI manager for ""
	I1209 04:35:37.102446 1614600 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 04:35:37.102494 1614600 start.go:353] cluster config:
	{Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:35:37.105799 1614600 out.go:179] * Starting "functional-331811" primary control-plane node in "functional-331811" cluster
	I1209 04:35:37.108781 1614600 cache.go:134] Beginning downloading kic base image for docker with crio
	I1209 04:35:37.111778 1614600 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 04:35:37.114815 1614600 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1209 04:35:37.114886 1614600 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1209 04:35:37.114901 1614600 cache.go:65] Caching tarball of preloaded images
	I1209 04:35:37.114901 1614600 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 04:35:37.114988 1614600 preload.go:238] Found /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1209 04:35:37.114998 1614600 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1209 04:35:37.115114 1614600 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/config.json ...
	I1209 04:35:37.133782 1614600 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 04:35:37.133805 1614600 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 04:35:37.133825 1614600 cache.go:243] Successfully downloaded all kic artifacts
	I1209 04:35:37.133858 1614600 start.go:360] acquireMachinesLock for functional-331811: {Name:mkd467b4f3dd08f05040481144eb7b6b1e27d3ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 04:35:37.133920 1614600 start.go:364] duration metric: took 38.638µs to acquireMachinesLock for "functional-331811"
	I1209 04:35:37.133944 1614600 start.go:96] Skipping create...Using existing machine configuration
	I1209 04:35:37.133953 1614600 fix.go:54] fixHost starting: 
	I1209 04:35:37.134223 1614600 cli_runner.go:164] Run: docker container inspect functional-331811 --format={{.State.Status}}
	I1209 04:35:37.151389 1614600 fix.go:112] recreateIfNeeded on functional-331811: state=Running err=<nil>
	W1209 04:35:37.151428 1614600 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 04:35:37.154776 1614600 out.go:252] * Updating the running docker "functional-331811" container ...
	I1209 04:35:37.154815 1614600 machine.go:94] provisionDockerMachine start ...
	I1209 04:35:37.154907 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:37.171646 1614600 main.go:143] libmachine: Using SSH client type: native
	I1209 04:35:37.171972 1614600 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34255 <nil> <nil>}
	I1209 04:35:37.171985 1614600 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 04:35:37.327745 1614600 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-331811
	
	I1209 04:35:37.327810 1614600 ubuntu.go:182] provisioning hostname "functional-331811"
	I1209 04:35:37.327896 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:37.347228 1614600 main.go:143] libmachine: Using SSH client type: native
	I1209 04:35:37.347562 1614600 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34255 <nil> <nil>}
	I1209 04:35:37.347574 1614600 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-331811 && echo "functional-331811" | sudo tee /etc/hostname
	I1209 04:35:37.512164 1614600 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-331811
	
	I1209 04:35:37.512262 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:37.529769 1614600 main.go:143] libmachine: Using SSH client type: native
	I1209 04:35:37.530100 1614600 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34255 <nil> <nil>}
	I1209 04:35:37.530124 1614600 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-331811' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-331811/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-331811' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 04:35:37.682808 1614600 main.go:143] libmachine: SSH cmd err, output: <nil>: 
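
The SSH block above patches /etc/hosts idempotently: it first checks whether an entry for functional-331811 already exists, rewrites the 127.0.1.1 line in place if one is present, and appends a fresh entry otherwise, so repeated provisioning runs never duplicate lines. A quick spot-check from the host (a sketch; the container name is taken from this run):

	# confirm the node resolves its own hostname locally
	docker exec functional-331811 grep functional-331811 /etc/hosts
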
	I1209 04:35:37.682838 1614600 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1577059/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1577059/.minikube}
	I1209 04:35:37.682870 1614600 ubuntu.go:190] setting up certificates
	I1209 04:35:37.682895 1614600 provision.go:84] configureAuth start
	I1209 04:35:37.682958 1614600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-331811
	I1209 04:35:37.700930 1614600 provision.go:143] copyHostCerts
	I1209 04:35:37.700976 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem
	I1209 04:35:37.701008 1614600 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem, removing ...
	I1209 04:35:37.701021 1614600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem
	I1209 04:35:37.701094 1614600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem (1078 bytes)
	I1209 04:35:37.701192 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem
	I1209 04:35:37.701215 1614600 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem, removing ...
	I1209 04:35:37.701230 1614600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem
	I1209 04:35:37.701259 1614600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem (1123 bytes)
	I1209 04:35:37.701304 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem
	I1209 04:35:37.701324 1614600 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem, removing ...
	I1209 04:35:37.701331 1614600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem
	I1209 04:35:37.701357 1614600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem (1675 bytes)
	I1209 04:35:37.701411 1614600 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem org=jenkins.functional-331811 san=[127.0.0.1 192.168.49.2 functional-331811 localhost minikube]
	I1209 04:35:37.907915 1614600 provision.go:177] copyRemoteCerts
	I1209 04:35:37.907981 1614600 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 04:35:37.908038 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:37.925118 1614600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:35:38.031668 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 04:35:38.031745 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 04:35:38.051846 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 04:35:38.051953 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 04:35:38.075178 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 04:35:38.075249 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 04:35:38.102039 1614600 provision.go:87] duration metric: took 419.115897ms to configureAuth
	I1209 04:35:38.102117 1614600 ubuntu.go:206] setting minikube options for container-runtime
	I1209 04:35:38.102384 1614600 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 04:35:38.102539 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:38.125059 1614600 main.go:143] libmachine: Using SSH client type: native
	I1209 04:35:38.125376 1614600 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34255 <nil> <nil>}
	I1209 04:35:38.125391 1614600 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 04:35:38.471803 1614600 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 04:35:38.471824 1614600 machine.go:97] duration metric: took 1.317001735s to provisionDockerMachine
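
The CRIO_MINIKUBE_OPTIONS drop-in written a few lines up passes --insecure-registry 10.96.0.0/12 to CRI-O, matching the profile's ServiceCIDR so pods can pull from plain-HTTP registries exposed on cluster service IPs. To confirm the override landed and the restarted daemon is healthy (a sketch reusing the container name and paths shown in this log):

	# show the drop-in minikube wrote, then check the service state
	docker exec functional-331811 cat /etc/sysconfig/crio.minikube
	docker exec functional-331811 systemctl is-active crio
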
	I1209 04:35:38.471836 1614600 start.go:293] postStartSetup for "functional-331811" (driver="docker")
	I1209 04:35:38.471848 1614600 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 04:35:38.471925 1614600 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 04:35:38.471961 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:38.490918 1614600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:35:38.598660 1614600 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 04:35:38.602109 1614600 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1209 04:35:38.602129 1614600 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1209 04:35:38.602133 1614600 command_runner.go:130] > VERSION_ID="12"
	I1209 04:35:38.602137 1614600 command_runner.go:130] > VERSION="12 (bookworm)"
	I1209 04:35:38.602143 1614600 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1209 04:35:38.602146 1614600 command_runner.go:130] > ID=debian
	I1209 04:35:38.602151 1614600 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1209 04:35:38.602156 1614600 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1209 04:35:38.602162 1614600 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1209 04:35:38.602263 1614600 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 04:35:38.602312 1614600 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 04:35:38.602329 1614600 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1577059/.minikube/addons for local assets ...
	I1209 04:35:38.602392 1614600 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1577059/.minikube/files for local assets ...
	I1209 04:35:38.602478 1614600 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem -> 15805212.pem in /etc/ssl/certs
	I1209 04:35:38.602488 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem -> /etc/ssl/certs/15805212.pem
	I1209 04:35:38.602561 1614600 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/test/nested/copy/1580521/hosts -> hosts in /etc/test/nested/copy/1580521
	I1209 04:35:38.602585 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/test/nested/copy/1580521/hosts -> /etc/test/nested/copy/1580521/hosts
	I1209 04:35:38.602639 1614600 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1580521
	I1209 04:35:38.610143 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem --> /etc/ssl/certs/15805212.pem (1708 bytes)
	I1209 04:35:38.627602 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/test/nested/copy/1580521/hosts --> /etc/test/nested/copy/1580521/hosts (40 bytes)
	I1209 04:35:38.644510 1614600 start.go:296] duration metric: took 172.65884ms for postStartSetup
	I1209 04:35:38.644590 1614600 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 04:35:38.644638 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:38.661666 1614600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:35:38.763521 1614600 command_runner.go:130] > 14%
	I1209 04:35:38.763600 1614600 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 04:35:38.767910 1614600 command_runner.go:130] > 169G
	I1209 04:35:38.768419 1614600 fix.go:56] duration metric: took 1.634462107s for fixHost
	I1209 04:35:38.768442 1614600 start.go:83] releasing machines lock for "functional-331811", held for 1.634508761s
	I1209 04:35:38.768510 1614600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-331811
	I1209 04:35:38.785686 1614600 ssh_runner.go:195] Run: cat /version.json
	I1209 04:35:38.785708 1614600 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 04:35:38.785735 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:38.785760 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:38.812264 1614600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:35:38.824669 1614600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:35:38.938034 1614600 command_runner.go:130] > {"iso_version": "v1.37.0-1764843329-22032", "kicbase_version": "v0.0.48-1765184860-22066", "minikube_version": "v1.37.0", "commit": "27bcd52be11288bda2f9abde063aa47b22607695"}
	I1209 04:35:38.938167 1614600 ssh_runner.go:195] Run: systemctl --version
	I1209 04:35:39.026186 1614600 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1209 04:35:39.029038 1614600 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1209 04:35:39.029075 1614600 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1209 04:35:39.029143 1614600 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 04:35:39.066886 1614600 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1209 04:35:39.071437 1614600 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1209 04:35:39.071476 1614600 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 04:35:39.071539 1614600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 04:35:39.079896 1614600 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 04:35:39.079922 1614600 start.go:496] detecting cgroup driver to use...
	I1209 04:35:39.079956 1614600 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 04:35:39.080020 1614600 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 04:35:39.095690 1614600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 04:35:39.109020 1614600 docker.go:218] disabling cri-docker service (if available) ...
	I1209 04:35:39.109092 1614600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 04:35:39.124696 1614600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 04:35:39.138081 1614600 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 04:35:39.247127 1614600 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 04:35:39.364113 1614600 docker.go:234] disabling docker service ...
	I1209 04:35:39.364202 1614600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 04:35:39.381227 1614600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 04:35:39.394458 1614600 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 04:35:39.513409 1614600 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 04:35:39.656760 1614600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 04:35:39.669700 1614600 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 04:35:39.682849 1614600 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1209 04:35:39.684261 1614600 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 04:35:39.684369 1614600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:35:39.693327 1614600 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 04:35:39.693420 1614600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:35:39.702710 1614600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:35:39.711893 1614600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:35:39.720974 1614600 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 04:35:39.729134 1614600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:35:39.738010 1614600 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:35:39.746818 1614600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:35:39.757592 1614600 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 04:35:39.764510 1614600 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1209 04:35:39.765518 1614600 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 04:35:39.773280 1614600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:35:39.885186 1614600 ssh_runner.go:195] Run: sudo systemctl restart crio
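
Taken together, the sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O pins the pause image the kubelet expects, uses the cgroupfs driver detected on the host, keeps conmon in the pod cgroup, and lets pods bind privileged ports. Condensed into a single root shell script (a sketch that reproduces only the commands shown in this log):

	# pin the pause image
	sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	# match the host's cgroupfs driver and run conmon in the pod cgroup
	sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# allow pods to bind ports below 1024 without extra capabilities
	grep -q '^ *default_sysctls' /etc/crio/crio.conf.d/02-crio.conf || sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf
	sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
	systemctl daemon-reload && systemctl restart crio
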
	I1209 04:35:40.065444 1614600 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 04:35:40.065521 1614600 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 04:35:40.069680 1614600 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1209 04:35:40.069719 1614600 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1209 04:35:40.069751 1614600 command_runner.go:130] > Device: 0,72	Inode: 1638        Links: 1
	I1209 04:35:40.069764 1614600 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1209 04:35:40.069773 1614600 command_runner.go:130] > Access: 2025-12-09 04:35:39.990981436 +0000
	I1209 04:35:40.069780 1614600 command_runner.go:130] > Modify: 2025-12-09 04:35:39.990981436 +0000
	I1209 04:35:40.069788 1614600 command_runner.go:130] > Change: 2025-12-09 04:35:39.990981436 +0000
	I1209 04:35:40.069792 1614600 command_runner.go:130] >  Birth: -
	I1209 04:35:40.069850 1614600 start.go:564] Will wait 60s for crictl version
	I1209 04:35:40.069925 1614600 ssh_runner.go:195] Run: which crictl
	I1209 04:35:40.073554 1614600 command_runner.go:130] > /usr/local/bin/crictl
	I1209 04:35:40.073791 1614600 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 04:35:40.095945 1614600 command_runner.go:130] > Version:  0.1.0
	I1209 04:35:40.096030 1614600 command_runner.go:130] > RuntimeName:  cri-o
	I1209 04:35:40.096051 1614600 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1209 04:35:40.096074 1614600 command_runner.go:130] > RuntimeApiVersion:  v1
	I1209 04:35:40.098378 1614600 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1209 04:35:40.098514 1614600 ssh_runner.go:195] Run: crio --version
	I1209 04:35:40.127067 1614600 command_runner.go:130] > crio version 1.34.3
	I1209 04:35:40.127092 1614600 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1209 04:35:40.127099 1614600 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1209 04:35:40.127105 1614600 command_runner.go:130] >    GitTreeState:   dirty
	I1209 04:35:40.127110 1614600 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1209 04:35:40.127137 1614600 command_runner.go:130] >    GoVersion:      go1.24.6
	I1209 04:35:40.127156 1614600 command_runner.go:130] >    Compiler:       gc
	I1209 04:35:40.127168 1614600 command_runner.go:130] >    Platform:       linux/arm64
	I1209 04:35:40.127172 1614600 command_runner.go:130] >    Linkmode:       static
	I1209 04:35:40.127180 1614600 command_runner.go:130] >    BuildTags:
	I1209 04:35:40.127185 1614600 command_runner.go:130] >      static
	I1209 04:35:40.127194 1614600 command_runner.go:130] >      netgo
	I1209 04:35:40.127198 1614600 command_runner.go:130] >      osusergo
	I1209 04:35:40.127227 1614600 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1209 04:35:40.127238 1614600 command_runner.go:130] >      seccomp
	I1209 04:35:40.127242 1614600 command_runner.go:130] >      apparmor
	I1209 04:35:40.127250 1614600 command_runner.go:130] >      selinux
	I1209 04:35:40.127255 1614600 command_runner.go:130] >    LDFlags:          unknown
	I1209 04:35:40.127262 1614600 command_runner.go:130] >    SeccompEnabled:   true
	I1209 04:35:40.127267 1614600 command_runner.go:130] >    AppArmorEnabled:  false
	I1209 04:35:40.129252 1614600 ssh_runner.go:195] Run: crio --version
	I1209 04:35:40.157358 1614600 command_runner.go:130] > crio version 1.34.3
	I1209 04:35:40.157406 1614600 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1209 04:35:40.157412 1614600 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1209 04:35:40.157417 1614600 command_runner.go:130] >    GitTreeState:   dirty
	I1209 04:35:40.157423 1614600 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1209 04:35:40.157427 1614600 command_runner.go:130] >    GoVersion:      go1.24.6
	I1209 04:35:40.157432 1614600 command_runner.go:130] >    Compiler:       gc
	I1209 04:35:40.157472 1614600 command_runner.go:130] >    Platform:       linux/arm64
	I1209 04:35:40.157484 1614600 command_runner.go:130] >    Linkmode:       static
	I1209 04:35:40.157489 1614600 command_runner.go:130] >    BuildTags:
	I1209 04:35:40.157492 1614600 command_runner.go:130] >      static
	I1209 04:35:40.157496 1614600 command_runner.go:130] >      netgo
	I1209 04:35:40.157508 1614600 command_runner.go:130] >      osusergo
	I1209 04:35:40.157512 1614600 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1209 04:35:40.157516 1614600 command_runner.go:130] >      seccomp
	I1209 04:35:40.157547 1614600 command_runner.go:130] >      apparmor
	I1209 04:35:40.157557 1614600 command_runner.go:130] >      selinux
	I1209 04:35:40.157562 1614600 command_runner.go:130] >    LDFlags:          unknown
	I1209 04:35:40.157567 1614600 command_runner.go:130] >    SeccompEnabled:   true
	I1209 04:35:40.157573 1614600 command_runner.go:130] >    AppArmorEnabled:  false
	I1209 04:35:40.164627 1614600 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1209 04:35:40.167496 1614600 cli_runner.go:164] Run: docker network inspect functional-331811 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 04:35:40.183934 1614600 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1209 04:35:40.187985 1614600 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1209 04:35:40.188113 1614600 kubeadm.go:884] updating cluster {Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 04:35:40.188232 1614600 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1209 04:35:40.188297 1614600 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:35:40.225616 1614600 command_runner.go:130] > {
	I1209 04:35:40.225636 1614600 command_runner.go:130] >   "images":  [
	I1209 04:35:40.225641 1614600 command_runner.go:130] >     {
	I1209 04:35:40.225650 1614600 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1209 04:35:40.225655 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.225670 1614600 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1209 04:35:40.225673 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225678 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.225687 1614600 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1209 04:35:40.225695 1614600 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1209 04:35:40.225699 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225704 1614600 command_runner.go:130] >       "size":  "111333938",
	I1209 04:35:40.225711 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.225716 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.225719 1614600 command_runner.go:130] >     },
	I1209 04:35:40.225723 1614600 command_runner.go:130] >     {
	I1209 04:35:40.225729 1614600 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1209 04:35:40.225733 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.225738 1614600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1209 04:35:40.225742 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225751 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.225760 1614600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1209 04:35:40.225769 1614600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1209 04:35:40.225773 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225777 1614600 command_runner.go:130] >       "size":  "29037500",
	I1209 04:35:40.225781 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.225789 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.225792 1614600 command_runner.go:130] >     },
	I1209 04:35:40.225795 1614600 command_runner.go:130] >     {
	I1209 04:35:40.225802 1614600 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1209 04:35:40.225806 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.225811 1614600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1209 04:35:40.225814 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225818 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.225826 1614600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1209 04:35:40.225835 1614600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1209 04:35:40.225838 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225842 1614600 command_runner.go:130] >       "size":  "74491780",
	I1209 04:35:40.225847 1614600 command_runner.go:130] >       "username":  "nonroot",
	I1209 04:35:40.225851 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.225854 1614600 command_runner.go:130] >     },
	I1209 04:35:40.225857 1614600 command_runner.go:130] >     {
	I1209 04:35:40.225864 1614600 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1209 04:35:40.225868 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.225872 1614600 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1209 04:35:40.225881 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225885 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.225897 1614600 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1209 04:35:40.225905 1614600 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1209 04:35:40.225909 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225913 1614600 command_runner.go:130] >       "size":  "60857170",
	I1209 04:35:40.225916 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.225920 1614600 command_runner.go:130] >         "value":  "0"
	I1209 04:35:40.225923 1614600 command_runner.go:130] >       },
	I1209 04:35:40.225931 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.225936 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.225939 1614600 command_runner.go:130] >     },
	I1209 04:35:40.225942 1614600 command_runner.go:130] >     {
	I1209 04:35:40.225949 1614600 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1209 04:35:40.225953 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.225958 1614600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1209 04:35:40.225961 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225965 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.225973 1614600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1209 04:35:40.225981 1614600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1209 04:35:40.225983 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.225987 1614600 command_runner.go:130] >       "size":  "84949999",
	I1209 04:35:40.225991 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.225995 1614600 command_runner.go:130] >         "value":  "0"
	I1209 04:35:40.225998 1614600 command_runner.go:130] >       },
	I1209 04:35:40.226001 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.226005 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.226008 1614600 command_runner.go:130] >     },
	I1209 04:35:40.226011 1614600 command_runner.go:130] >     {
	I1209 04:35:40.226018 1614600 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1209 04:35:40.226021 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.226027 1614600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1209 04:35:40.226030 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.226037 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.226045 1614600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1209 04:35:40.226054 1614600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1209 04:35:40.226057 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.226062 1614600 command_runner.go:130] >       "size":  "72170325",
	I1209 04:35:40.226065 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.226069 1614600 command_runner.go:130] >         "value":  "0"
	I1209 04:35:40.226072 1614600 command_runner.go:130] >       },
	I1209 04:35:40.226076 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.226080 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.226082 1614600 command_runner.go:130] >     },
	I1209 04:35:40.226085 1614600 command_runner.go:130] >     {
	I1209 04:35:40.226092 1614600 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1209 04:35:40.226096 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.226101 1614600 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1209 04:35:40.226104 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.226108 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.226115 1614600 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1209 04:35:40.226123 1614600 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1209 04:35:40.226126 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.226130 1614600 command_runner.go:130] >       "size":  "74106775",
	I1209 04:35:40.226133 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.226137 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.226140 1614600 command_runner.go:130] >     },
	I1209 04:35:40.226143 1614600 command_runner.go:130] >     {
	I1209 04:35:40.226149 1614600 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1209 04:35:40.226153 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.226159 1614600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1209 04:35:40.226162 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.226166 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.226174 1614600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1209 04:35:40.226196 1614600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1209 04:35:40.226200 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.226207 1614600 command_runner.go:130] >       "size":  "49822549",
	I1209 04:35:40.226210 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.226214 1614600 command_runner.go:130] >         "value":  "0"
	I1209 04:35:40.226218 1614600 command_runner.go:130] >       },
	I1209 04:35:40.226222 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.226226 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.226228 1614600 command_runner.go:130] >     },
	I1209 04:35:40.226232 1614600 command_runner.go:130] >     {
	I1209 04:35:40.226238 1614600 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1209 04:35:40.226242 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.226246 1614600 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1209 04:35:40.226249 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.226253 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.226261 1614600 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1209 04:35:40.226269 1614600 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1209 04:35:40.226273 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.226277 1614600 command_runner.go:130] >       "size":  "519884",
	I1209 04:35:40.226280 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.226284 1614600 command_runner.go:130] >         "value":  "65535"
	I1209 04:35:40.226288 1614600 command_runner.go:130] >       },
	I1209 04:35:40.226294 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.226297 1614600 command_runner.go:130] >       "pinned":  true
	I1209 04:35:40.226301 1614600 command_runner.go:130] >     }
	I1209 04:35:40.226303 1614600 command_runner.go:130] >   ]
	I1209 04:35:40.226307 1614600 command_runner.go:130] > }
	I1209 04:35:40.228010 1614600 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 04:35:40.228035 1614600 crio.go:433] Images already preloaded, skipping extraction
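
The preload check walks the JSON that `sudo crictl images --output json` returned above: every image the v1.35.0-beta.0 CRI-O preload ships (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, etcd, coredns, pause, kindnet, storage-provisioner) is already present, so crio.go skips extraction. To summarize the same output by hand on the node (a sketch; assumes jq is available, which this log does not show):

	# list each image tag with its size in bytes
	sudo crictl images --output json | jq -r '.images[] | "\(.repoTags[0])\t\(.size)"'
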
	I1209 04:35:40.228091 1614600 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:35:40.253311 1614600 command_runner.go:130] > {
	I1209 04:35:40.253331 1614600 command_runner.go:130] >   "images":  [
	I1209 04:35:40.253335 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253349 1614600 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1209 04:35:40.253353 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253360 1614600 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1209 04:35:40.253363 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253367 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253375 1614600 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1209 04:35:40.253383 1614600 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1209 04:35:40.253386 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253391 1614600 command_runner.go:130] >       "size":  "111333938",
	I1209 04:35:40.253395 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.253400 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.253403 1614600 command_runner.go:130] >     },
	I1209 04:35:40.253406 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253412 1614600 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1209 04:35:40.253416 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253421 1614600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1209 04:35:40.253425 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253429 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253437 1614600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1209 04:35:40.253445 1614600 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1209 04:35:40.253449 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253453 1614600 command_runner.go:130] >       "size":  "29037500",
	I1209 04:35:40.253457 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.253463 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.253466 1614600 command_runner.go:130] >     },
	I1209 04:35:40.253469 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253476 1614600 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1209 04:35:40.253480 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253485 1614600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1209 04:35:40.253489 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253492 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253500 1614600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1209 04:35:40.253508 1614600 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1209 04:35:40.253515 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253519 1614600 command_runner.go:130] >       "size":  "74491780",
	I1209 04:35:40.253523 1614600 command_runner.go:130] >       "username":  "nonroot",
	I1209 04:35:40.253528 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.253531 1614600 command_runner.go:130] >     },
	I1209 04:35:40.253534 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253540 1614600 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1209 04:35:40.253544 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253549 1614600 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1209 04:35:40.253553 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253557 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253564 1614600 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1209 04:35:40.253571 1614600 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1209 04:35:40.253574 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253578 1614600 command_runner.go:130] >       "size":  "60857170",
	I1209 04:35:40.253581 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.253585 1614600 command_runner.go:130] >         "value":  "0"
	I1209 04:35:40.253592 1614600 command_runner.go:130] >       },
	I1209 04:35:40.253600 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.253604 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.253607 1614600 command_runner.go:130] >     },
	I1209 04:35:40.253611 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253617 1614600 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1209 04:35:40.253621 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253626 1614600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1209 04:35:40.253629 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253633 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253641 1614600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1209 04:35:40.253649 1614600 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1209 04:35:40.253651 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253655 1614600 command_runner.go:130] >       "size":  "84949999",
	I1209 04:35:40.253659 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.253662 1614600 command_runner.go:130] >         "value":  "0"
	I1209 04:35:40.253669 1614600 command_runner.go:130] >       },
	I1209 04:35:40.253672 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.253676 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.253679 1614600 command_runner.go:130] >     },
	I1209 04:35:40.253682 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253688 1614600 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1209 04:35:40.253691 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253698 1614600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1209 04:35:40.253701 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253704 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253713 1614600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1209 04:35:40.253721 1614600 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1209 04:35:40.253724 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253728 1614600 command_runner.go:130] >       "size":  "72170325",
	I1209 04:35:40.253731 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.253735 1614600 command_runner.go:130] >         "value":  "0"
	I1209 04:35:40.253738 1614600 command_runner.go:130] >       },
	I1209 04:35:40.253742 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.253745 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.253748 1614600 command_runner.go:130] >     },
	I1209 04:35:40.253751 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253758 1614600 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1209 04:35:40.253762 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253767 1614600 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1209 04:35:40.253770 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253773 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253781 1614600 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1209 04:35:40.253789 1614600 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1209 04:35:40.253792 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253795 1614600 command_runner.go:130] >       "size":  "74106775",
	I1209 04:35:40.253799 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.253803 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.253806 1614600 command_runner.go:130] >     },
	I1209 04:35:40.253812 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253819 1614600 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1209 04:35:40.253823 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253828 1614600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1209 04:35:40.253831 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253835 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253843 1614600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1209 04:35:40.253860 1614600 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1209 04:35:40.253863 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253867 1614600 command_runner.go:130] >       "size":  "49822549",
	I1209 04:35:40.253870 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.253874 1614600 command_runner.go:130] >         "value":  "0"
	I1209 04:35:40.253877 1614600 command_runner.go:130] >       },
	I1209 04:35:40.253881 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.253884 1614600 command_runner.go:130] >       "pinned":  false
	I1209 04:35:40.253887 1614600 command_runner.go:130] >     },
	I1209 04:35:40.253890 1614600 command_runner.go:130] >     {
	I1209 04:35:40.253896 1614600 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1209 04:35:40.253900 1614600 command_runner.go:130] >       "repoTags":  [
	I1209 04:35:40.253905 1614600 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1209 04:35:40.253908 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253912 1614600 command_runner.go:130] >       "repoDigests":  [
	I1209 04:35:40.253919 1614600 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1209 04:35:40.253926 1614600 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1209 04:35:40.253929 1614600 command_runner.go:130] >       ],
	I1209 04:35:40.253934 1614600 command_runner.go:130] >       "size":  "519884",
	I1209 04:35:40.253937 1614600 command_runner.go:130] >       "uid":  {
	I1209 04:35:40.253941 1614600 command_runner.go:130] >         "value":  "65535"
	I1209 04:35:40.253944 1614600 command_runner.go:130] >       },
	I1209 04:35:40.253948 1614600 command_runner.go:130] >       "username":  "",
	I1209 04:35:40.253952 1614600 command_runner.go:130] >       "pinned":  true
	I1209 04:35:40.253955 1614600 command_runner.go:130] >     }
	I1209 04:35:40.253958 1614600 command_runner.go:130] >   ]
	I1209 04:35:40.253965 1614600 command_runner.go:130] > }
	I1209 04:35:40.254095 1614600 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 04:35:40.254103 1614600 cache_images.go:86] Images are preloaded, skipping loading
	I1209 04:35:40.254110 1614600 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1209 04:35:40.254208 1614600 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-331811 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 04:35:40.254292 1614600 ssh_runner.go:195] Run: crio config
	I1209 04:35:40.303771 1614600 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1209 04:35:40.303802 1614600 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1209 04:35:40.303810 1614600 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1209 04:35:40.303813 1614600 command_runner.go:130] > #
	I1209 04:35:40.303821 1614600 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1209 04:35:40.303827 1614600 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1209 04:35:40.303834 1614600 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1209 04:35:40.303844 1614600 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1209 04:35:40.303848 1614600 command_runner.go:130] > # reload'.
	I1209 04:35:40.303854 1614600 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1209 04:35:40.303865 1614600 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1209 04:35:40.303872 1614600 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1209 04:35:40.303882 1614600 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1209 04:35:40.303886 1614600 command_runner.go:130] > [crio]
	I1209 04:35:40.303892 1614600 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1209 04:35:40.303900 1614600 command_runner.go:130] > # containers images, in this directory.
	I1209 04:35:40.304039 1614600 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1209 04:35:40.304055 1614600 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1209 04:35:40.304161 1614600 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1209 04:35:40.304178 1614600 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from Root.
	I1209 04:35:40.304429 1614600 command_runner.go:130] > # imagestore = ""
	I1209 04:35:40.304453 1614600 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1209 04:35:40.304461 1614600 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1209 04:35:40.304691 1614600 command_runner.go:130] > # storage_driver = "overlay"
	I1209 04:35:40.304703 1614600 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1209 04:35:40.304710 1614600 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1209 04:35:40.304804 1614600 command_runner.go:130] > # storage_option = [
	I1209 04:35:40.305009 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.305024 1614600 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1209 04:35:40.305032 1614600 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1209 04:35:40.305284 1614600 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1209 04:35:40.305301 1614600 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1209 04:35:40.305327 1614600 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1209 04:35:40.305337 1614600 command_runner.go:130] > # always happen on a node reboot
	I1209 04:35:40.305502 1614600 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1209 04:35:40.305532 1614600 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1209 04:35:40.305540 1614600 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1209 04:35:40.305547 1614600 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1209 04:35:40.305748 1614600 command_runner.go:130] > # version_file_persist = ""
	I1209 04:35:40.305764 1614600 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1209 04:35:40.305775 1614600 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1209 04:35:40.306057 1614600 command_runner.go:130] > # internal_wipe = true
	I1209 04:35:40.306082 1614600 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1209 04:35:40.306090 1614600 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1209 04:35:40.306271 1614600 command_runner.go:130] > # internal_repair = true
	I1209 04:35:40.306293 1614600 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1209 04:35:40.306300 1614600 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1209 04:35:40.306308 1614600 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1209 04:35:40.306632 1614600 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1209 04:35:40.306647 1614600 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1209 04:35:40.306651 1614600 command_runner.go:130] > [crio.api]
	I1209 04:35:40.306663 1614600 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1209 04:35:40.306916 1614600 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1209 04:35:40.306934 1614600 command_runner.go:130] > # IP address on which the stream server will listen.
	I1209 04:35:40.307148 1614600 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1209 04:35:40.307163 1614600 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1209 04:35:40.307169 1614600 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1209 04:35:40.307396 1614600 command_runner.go:130] > # stream_port = "0"
	I1209 04:35:40.307416 1614600 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1209 04:35:40.307661 1614600 command_runner.go:130] > # stream_enable_tls = false
	I1209 04:35:40.307682 1614600 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1209 04:35:40.307871 1614600 command_runner.go:130] > # stream_idle_timeout = ""
	I1209 04:35:40.307887 1614600 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1209 04:35:40.307900 1614600 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1209 04:35:40.308079 1614600 command_runner.go:130] > # stream_tls_cert = ""
	I1209 04:35:40.308090 1614600 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1209 04:35:40.308097 1614600 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1209 04:35:40.308297 1614600 command_runner.go:130] > # stream_tls_key = ""
	I1209 04:35:40.308313 1614600 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1209 04:35:40.308326 1614600 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1209 04:35:40.308345 1614600 command_runner.go:130] > # automatically pick up the changes.
	I1209 04:35:40.308572 1614600 command_runner.go:130] > # stream_tls_ca = ""
	I1209 04:35:40.308610 1614600 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1209 04:35:40.308814 1614600 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1209 04:35:40.308835 1614600 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1209 04:35:40.309085 1614600 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
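	For reference, the 80 * 1024 * 1024 default called out above works out to the 83886080 shown. A minimal sketch of overriding both limits in crio.conf, assuming a hypothetical 16 MiB target (16 * 1024 * 1024 = 16777216); any value <= 0 falls back to the default:

	    [crio.api]
	    # Illustrative override; these values are not taken from this run.
	    grpc_max_send_msg_size = 16777216
	    grpc_max_recv_msg_size = 16777216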
	I1209 04:35:40.309103 1614600 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1209 04:35:40.309115 1614600 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1209 04:35:40.309119 1614600 command_runner.go:130] > [crio.runtime]
	I1209 04:35:40.309126 1614600 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1209 04:35:40.309132 1614600 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1209 04:35:40.309143 1614600 command_runner.go:130] > # "nofile=1024:2048"
	I1209 04:35:40.309150 1614600 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1209 04:35:40.309302 1614600 command_runner.go:130] > # default_ulimits = [
	I1209 04:35:40.309485 1614600 command_runner.go:130] > # ]
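	A minimal sketch of a populated default_ulimits list, reusing the "nofile=1024:2048" soft:hard example from the comment above (illustrative, not from this run's config):

	    [crio.runtime]
	    # "<ulimit name>=<soft limit>:<hard limit>"
	    default_ulimits = [
	        "nofile=1024:2048",
	    ]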
	I1209 04:35:40.309504 1614600 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1209 04:35:40.309688 1614600 command_runner.go:130] > # no_pivot = false
	I1209 04:35:40.309706 1614600 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1209 04:35:40.309713 1614600 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1209 04:35:40.310551 1614600 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1209 04:35:40.310598 1614600 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1209 04:35:40.310608 1614600 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1209 04:35:40.310618 1614600 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorPath.
	I1209 04:35:40.310767 1614600 command_runner.go:130] > # conmon = ""
	I1209 04:35:40.310786 1614600 command_runner.go:130] > # Cgroup setting for conmon
	I1209 04:35:40.310795 1614600 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1209 04:35:40.310806 1614600 command_runner.go:130] > conmon_cgroup = "pod"
	I1209 04:35:40.310814 1614600 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1209 04:35:40.310835 1614600 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1209 04:35:40.310842 1614600 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1209 04:35:40.310849 1614600 command_runner.go:130] > # conmon_env = [
	I1209 04:35:40.310857 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.310866 1614600 command_runner.go:130] > # Additional environment variables to set for all the
	I1209 04:35:40.310873 1614600 command_runner.go:130] > # containers. These are overridden if set in the
	I1209 04:35:40.310879 1614600 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1209 04:35:40.310886 1614600 command_runner.go:130] > # default_env = [
	I1209 04:35:40.310889 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.310895 1614600 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1209 04:35:40.310907 1614600 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1209 04:35:40.310914 1614600 command_runner.go:130] > # selinux = false
	I1209 04:35:40.310925 1614600 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1209 04:35:40.310933 1614600 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1209 04:35:40.310938 1614600 command_runner.go:130] > # This option supports live configuration reload.
	I1209 04:35:40.310944 1614600 command_runner.go:130] > # seccomp_profile = ""
	I1209 04:35:40.310954 1614600 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1209 04:35:40.310963 1614600 command_runner.go:130] > # This option supports live configuration reload.
	I1209 04:35:40.310968 1614600 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1209 04:35:40.310974 1614600 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1209 04:35:40.310984 1614600 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1209 04:35:40.310991 1614600 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1209 04:35:40.311002 1614600 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1209 04:35:40.311007 1614600 command_runner.go:130] > # This option supports live configuration reload.
	I1209 04:35:40.311011 1614600 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1209 04:35:40.311017 1614600 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1209 04:35:40.311022 1614600 command_runner.go:130] > # the cgroup blockio controller.
	I1209 04:35:40.311028 1614600 command_runner.go:130] > # blockio_config_file = ""
	I1209 04:35:40.311035 1614600 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1209 04:35:40.311042 1614600 command_runner.go:130] > # blockio parameters.
	I1209 04:35:40.311046 1614600 command_runner.go:130] > # blockio_reload = false
	I1209 04:35:40.311059 1614600 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1209 04:35:40.311064 1614600 command_runner.go:130] > # irqbalance daemon.
	I1209 04:35:40.311073 1614600 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1209 04:35:40.311083 1614600 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I1209 04:35:40.311091 1614600 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1209 04:35:40.311107 1614600 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1209 04:35:40.311272 1614600 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1209 04:35:40.311287 1614600 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1209 04:35:40.311293 1614600 command_runner.go:130] > # This option supports live configuration reload.
	I1209 04:35:40.311441 1614600 command_runner.go:130] > # rdt_config_file = ""
	I1209 04:35:40.311462 1614600 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1209 04:35:40.311467 1614600 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1209 04:35:40.311477 1614600 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1209 04:35:40.311487 1614600 command_runner.go:130] > # separate_pull_cgroup = ""
	I1209 04:35:40.311493 1614600 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1209 04:35:40.311505 1614600 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1209 04:35:40.311514 1614600 command_runner.go:130] > # will be added.
	I1209 04:35:40.311522 1614600 command_runner.go:130] > # default_capabilities = [
	I1209 04:35:40.311525 1614600 command_runner.go:130] > # 	"CHOWN",
	I1209 04:35:40.311531 1614600 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1209 04:35:40.311535 1614600 command_runner.go:130] > # 	"FSETID",
	I1209 04:35:40.311541 1614600 command_runner.go:130] > # 	"FOWNER",
	I1209 04:35:40.311545 1614600 command_runner.go:130] > # 	"SETGID",
	I1209 04:35:40.311548 1614600 command_runner.go:130] > # 	"SETUID",
	I1209 04:35:40.311573 1614600 command_runner.go:130] > # 	"SETPCAP",
	I1209 04:35:40.311581 1614600 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1209 04:35:40.311585 1614600 command_runner.go:130] > # 	"KILL",
	I1209 04:35:40.311752 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.311769 1614600 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1209 04:35:40.311777 1614600 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1209 04:35:40.311784 1614600 command_runner.go:130] > # add_inheritable_capabilities = false
	I1209 04:35:40.311790 1614600 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1209 04:35:40.311796 1614600 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1209 04:35:40.311802 1614600 command_runner.go:130] > default_sysctls = [
	I1209 04:35:40.311807 1614600 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1209 04:35:40.311811 1614600 command_runner.go:130] > ]
	I1209 04:35:40.311823 1614600 command_runner.go:130] > # List of devices on the host that a
	I1209 04:35:40.311829 1614600 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1209 04:35:40.311833 1614600 command_runner.go:130] > # allowed_devices = [
	I1209 04:35:40.311843 1614600 command_runner.go:130] > # 	"/dev/fuse",
	I1209 04:35:40.311847 1614600 command_runner.go:130] > # 	"/dev/net/tun",
	I1209 04:35:40.311851 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.311856 1614600 command_runner.go:130] > # List of additional devices, specified as
	I1209 04:35:40.311863 1614600 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1209 04:35:40.311870 1614600 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1209 04:35:40.311876 1614600 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1209 04:35:40.311883 1614600 command_runner.go:130] > # additional_devices = [
	I1209 04:35:40.311886 1614600 command_runner.go:130] > # ]
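	A minimal sketch of a populated additional_devices list, reusing the "/dev/sdc:/dev/xvdc:rwm" example from the comment above (illustrative, not from this run's config):

	    [crio.runtime]
	    # "<device-on-host>:<device-on-container>:<permissions>"
	    additional_devices = [
	        "/dev/sdc:/dev/xvdc:rwm",
	    ]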
	I1209 04:35:40.311896 1614600 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1209 04:35:40.311900 1614600 command_runner.go:130] > # cdi_spec_dirs = [
	I1209 04:35:40.311903 1614600 command_runner.go:130] > # 	"/etc/cdi",
	I1209 04:35:40.311908 1614600 command_runner.go:130] > # 	"/var/run/cdi",
	I1209 04:35:40.311916 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.311923 1614600 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1209 04:35:40.311929 1614600 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1209 04:35:40.311936 1614600 command_runner.go:130] > # Defaults to false.
	I1209 04:35:40.311942 1614600 command_runner.go:130] > # device_ownership_from_security_context = false
	I1209 04:35:40.311958 1614600 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1209 04:35:40.311969 1614600 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1209 04:35:40.311973 1614600 command_runner.go:130] > # hooks_dir = [
	I1209 04:35:40.311980 1614600 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1209 04:35:40.311986 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.311992 1614600 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1209 04:35:40.312007 1614600 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1209 04:35:40.312013 1614600 command_runner.go:130] > # its default mounts from the following two files:
	I1209 04:35:40.312021 1614600 command_runner.go:130] > #
	I1209 04:35:40.312027 1614600 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1209 04:35:40.312034 1614600 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1209 04:35:40.312039 1614600 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1209 04:35:40.312045 1614600 command_runner.go:130] > #
	I1209 04:35:40.312051 1614600 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1209 04:35:40.312057 1614600 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1209 04:35:40.312065 1614600 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1209 04:35:40.312074 1614600 command_runner.go:130] > #      only add mounts it finds in this file.
	I1209 04:35:40.312077 1614600 command_runner.go:130] > #
	I1209 04:35:40.312081 1614600 command_runner.go:130] > # default_mounts_file = ""
	I1209 04:35:40.312087 1614600 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1209 04:35:40.312097 1614600 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1209 04:35:40.312102 1614600 command_runner.go:130] > # pids_limit = -1
	I1209 04:35:40.312108 1614600 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1209 04:35:40.312120 1614600 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1209 04:35:40.312128 1614600 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1209 04:35:40.312137 1614600 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1209 04:35:40.312275 1614600 command_runner.go:130] > # log_size_max = -1
	I1209 04:35:40.312297 1614600 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1209 04:35:40.312305 1614600 command_runner.go:130] > # log_to_journald = false
	I1209 04:35:40.312312 1614600 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1209 04:35:40.312322 1614600 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1209 04:35:40.312328 1614600 command_runner.go:130] > # Path to directory for container attach sockets.
	I1209 04:35:40.312333 1614600 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1209 04:35:40.312338 1614600 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1209 04:35:40.312345 1614600 command_runner.go:130] > # bind_mount_prefix = ""
	I1209 04:35:40.312351 1614600 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1209 04:35:40.312355 1614600 command_runner.go:130] > # read_only = false
	I1209 04:35:40.312361 1614600 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1209 04:35:40.312373 1614600 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1209 04:35:40.312378 1614600 command_runner.go:130] > # live configuration reload.
	I1209 04:35:40.312551 1614600 command_runner.go:130] > # log_level = "info"
	I1209 04:35:40.312568 1614600 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1209 04:35:40.312574 1614600 command_runner.go:130] > # This option supports live configuration reload.
	I1209 04:35:40.312578 1614600 command_runner.go:130] > # log_filter = ""
	I1209 04:35:40.312588 1614600 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1209 04:35:40.312594 1614600 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1209 04:35:40.312600 1614600 command_runner.go:130] > # separated by comma.
	I1209 04:35:40.312614 1614600 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1209 04:35:40.312622 1614600 command_runner.go:130] > # uid_mappings = ""
	I1209 04:35:40.312629 1614600 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1209 04:35:40.312635 1614600 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1209 04:35:40.312644 1614600 command_runner.go:130] > # separated by comma.
	I1209 04:35:40.312652 1614600 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1209 04:35:40.312657 1614600 command_runner.go:130] > # gid_mappings = ""
	I1209 04:35:40.312663 1614600 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1209 04:35:40.312670 1614600 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1209 04:35:40.312676 1614600 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1209 04:35:40.312689 1614600 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1209 04:35:40.312694 1614600 command_runner.go:130] > # minimum_mappable_uid = -1
	I1209 04:35:40.312706 1614600 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1209 04:35:40.312713 1614600 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1209 04:35:40.312719 1614600 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1209 04:35:40.312730 1614600 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1209 04:35:40.312735 1614600 command_runner.go:130] > # minimum_mappable_gid = -1
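	A minimal sketch of the containerID:HostID:Size mapping form described above; the 100000/65536 host range is a hypothetical choice, and the minimum_mappable_* values are set to match it:

	    [crio.runtime]
	    # Map container UID/GID 0 onto a hypothetical unprivileged host range.
	    uid_mappings = "0:100000:65536"
	    gid_mappings = "0:100000:65536"
	    minimum_mappable_uid = 100000
	    minimum_mappable_gid = 100000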
	I1209 04:35:40.312745 1614600 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1209 04:35:40.312753 1614600 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1209 04:35:40.312759 1614600 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1209 04:35:40.312763 1614600 command_runner.go:130] > # ctr_stop_timeout = 30
	I1209 04:35:40.312771 1614600 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1209 04:35:40.312781 1614600 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1209 04:35:40.312787 1614600 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1209 04:35:40.312792 1614600 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1209 04:35:40.312800 1614600 command_runner.go:130] > # drop_infra_ctr = true
	I1209 04:35:40.312807 1614600 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1209 04:35:40.312813 1614600 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1209 04:35:40.312825 1614600 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1209 04:35:40.312831 1614600 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1209 04:35:40.312838 1614600 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1209 04:35:40.312846 1614600 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1209 04:35:40.312852 1614600 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1209 04:35:40.312863 1614600 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1209 04:35:40.312871 1614600 command_runner.go:130] > # shared_cpuset = ""
	I1209 04:35:40.312877 1614600 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1209 04:35:40.312882 1614600 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1209 04:35:40.312891 1614600 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1209 04:35:40.312899 1614600 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1209 04:35:40.312903 1614600 command_runner.go:130] > # pinns_path = ""
	I1209 04:35:40.312908 1614600 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1209 04:35:40.312919 1614600 command_runner.go:130] > # checkpoint and restore containers or pods (even if CRIU is found in $PATH).
	I1209 04:35:40.312924 1614600 command_runner.go:130] > # enable_criu_support = true
	I1209 04:35:40.312929 1614600 command_runner.go:130] > # Enable/disable the generation of the container,
	I1209 04:35:40.312936 1614600 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1209 04:35:40.312940 1614600 command_runner.go:130] > # enable_pod_events = false
	I1209 04:35:40.312948 1614600 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1209 04:35:40.312957 1614600 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1209 04:35:40.312962 1614600 command_runner.go:130] > # default_runtime = "crun"
	I1209 04:35:40.312967 1614600 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1209 04:35:40.312984 1614600 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating them as directories).
	I1209 04:35:40.312997 1614600 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1209 04:35:40.313003 1614600 command_runner.go:130] > # creation as a file is not desired either.
	I1209 04:35:40.313011 1614600 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1209 04:35:40.313018 1614600 command_runner.go:130] > # the hostname is being managed dynamically.
	I1209 04:35:40.313023 1614600 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1209 04:35:40.313241 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.313258 1614600 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1209 04:35:40.313265 1614600 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1209 04:35:40.313271 1614600 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1209 04:35:40.313279 1614600 command_runner.go:130] > # Each entry in the table should follow the format:
	I1209 04:35:40.313282 1614600 command_runner.go:130] > #
	I1209 04:35:40.313287 1614600 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1209 04:35:40.313298 1614600 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1209 04:35:40.313303 1614600 command_runner.go:130] > # runtime_type = "oci"
	I1209 04:35:40.313307 1614600 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1209 04:35:40.313320 1614600 command_runner.go:130] > # inherit_default_runtime = false
	I1209 04:35:40.313326 1614600 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1209 04:35:40.313335 1614600 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1209 04:35:40.313340 1614600 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1209 04:35:40.313344 1614600 command_runner.go:130] > # monitor_env = []
	I1209 04:35:40.313349 1614600 command_runner.go:130] > # privileged_without_host_devices = false
	I1209 04:35:40.313353 1614600 command_runner.go:130] > # allowed_annotations = []
	I1209 04:35:40.313359 1614600 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1209 04:35:40.313365 1614600 command_runner.go:130] > # no_sync_log = false
	I1209 04:35:40.313369 1614600 command_runner.go:130] > # default_annotations = {}
	I1209 04:35:40.313373 1614600 command_runner.go:130] > # stream_websockets = false
	I1209 04:35:40.313377 1614600 command_runner.go:130] > # seccomp_profile = ""
	I1209 04:35:40.313410 1614600 command_runner.go:130] > # Where:
	I1209 04:35:40.313420 1614600 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1209 04:35:40.313427 1614600 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1209 04:35:40.313440 1614600 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1209 04:35:40.313446 1614600 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1209 04:35:40.313450 1614600 command_runner.go:130] > #   in $PATH.
	I1209 04:35:40.313457 1614600 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1209 04:35:40.313465 1614600 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1209 04:35:40.313471 1614600 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1209 04:35:40.313477 1614600 command_runner.go:130] > #   state.
	I1209 04:35:40.313484 1614600 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1209 04:35:40.313498 1614600 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1209 04:35:40.313505 1614600 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1209 04:35:40.313515 1614600 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1209 04:35:40.313521 1614600 command_runner.go:130] > #   the values from the default runtime on load time.
	I1209 04:35:40.313528 1614600 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1209 04:35:40.313537 1614600 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1209 04:35:40.313543 1614600 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1209 04:35:40.313550 1614600 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1209 04:35:40.313558 1614600 command_runner.go:130] > #   The currently recognized values are:
	I1209 04:35:40.313565 1614600 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1209 04:35:40.313575 1614600 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1209 04:35:40.313584 1614600 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1209 04:35:40.313591 1614600 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1209 04:35:40.313599 1614600 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1209 04:35:40.313611 1614600 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1209 04:35:40.313618 1614600 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1209 04:35:40.313632 1614600 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1209 04:35:40.313638 1614600 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1209 04:35:40.313644 1614600 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1209 04:35:40.313651 1614600 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1209 04:35:40.313662 1614600 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1209 04:35:40.313668 1614600 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1209 04:35:40.313674 1614600 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1209 04:35:40.313684 1614600 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1209 04:35:40.313693 1614600 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1209 04:35:40.313703 1614600 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1209 04:35:40.313707 1614600 command_runner.go:130] > #   deprecated option "conmon".
	I1209 04:35:40.313715 1614600 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1209 04:35:40.313721 1614600 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1209 04:35:40.313730 1614600 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1209 04:35:40.313735 1614600 command_runner.go:130] > #   should be moved to the container's cgroup
	I1209 04:35:40.313742 1614600 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1209 04:35:40.313752 1614600 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1209 04:35:40.313763 1614600 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1209 04:35:40.313771 1614600 command_runner.go:130] > #   conmon-rs by using:
	I1209 04:35:40.313779 1614600 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1209 04:35:40.313788 1614600 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1209 04:35:40.313799 1614600 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1209 04:35:40.313806 1614600 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1209 04:35:40.313811 1614600 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1209 04:35:40.313818 1614600 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1209 04:35:40.313825 1614600 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1209 04:35:40.313830 1614600 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1209 04:35:40.313842 1614600 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1209 04:35:40.313852 1614600 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1209 04:35:40.313860 1614600 command_runner.go:130] > #   when a machine crash happens.
	I1209 04:35:40.313868 1614600 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1209 04:35:40.313881 1614600 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1209 04:35:40.313889 1614600 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1209 04:35:40.313894 1614600 command_runner.go:130] > #   seccomp profile for the runtime.
	I1209 04:35:40.313900 1614600 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1209 04:35:40.313911 1614600 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1209 04:35:40.313915 1614600 command_runner.go:130] > #
	I1209 04:35:40.313919 1614600 command_runner.go:130] > # Using the seccomp notifier feature:
	I1209 04:35:40.313927 1614600 command_runner.go:130] > #
	I1209 04:35:40.313934 1614600 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1209 04:35:40.313942 1614600 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1209 04:35:40.313949 1614600 command_runner.go:130] > #
	I1209 04:35:40.313955 1614600 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1209 04:35:40.313962 1614600 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1209 04:35:40.313965 1614600 command_runner.go:130] > #
	I1209 04:35:40.313971 1614600 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1209 04:35:40.313974 1614600 command_runner.go:130] > # feature.
	I1209 04:35:40.313977 1614600 command_runner.go:130] > #
	I1209 04:35:40.313983 1614600 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1209 04:35:40.313992 1614600 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1209 04:35:40.314004 1614600 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1209 04:35:40.314014 1614600 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1209 04:35:40.314021 1614600 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1209 04:35:40.314029 1614600 command_runner.go:130] > #
	I1209 04:35:40.314036 1614600 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1209 04:35:40.314042 1614600 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1209 04:35:40.314045 1614600 command_runner.go:130] > #
	I1209 04:35:40.314051 1614600 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1209 04:35:40.314057 1614600 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1209 04:35:40.314063 1614600 command_runner.go:130] > #
	I1209 04:35:40.314070 1614600 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1209 04:35:40.314076 1614600 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1209 04:35:40.314083 1614600 command_runner.go:130] > # limitation.
	I1209 04:35:40.314088 1614600 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1209 04:35:40.314093 1614600 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1209 04:35:40.314104 1614600 command_runner.go:130] > runtime_type = ""
	I1209 04:35:40.314108 1614600 command_runner.go:130] > runtime_root = "/run/crun"
	I1209 04:35:40.314112 1614600 command_runner.go:130] > inherit_default_runtime = false
	I1209 04:35:40.314120 1614600 command_runner.go:130] > runtime_config_path = ""
	I1209 04:35:40.314124 1614600 command_runner.go:130] > container_min_memory = ""
	I1209 04:35:40.314130 1614600 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1209 04:35:40.314134 1614600 command_runner.go:130] > monitor_cgroup = "pod"
	I1209 04:35:40.314138 1614600 command_runner.go:130] > monitor_exec_cgroup = ""
	I1209 04:35:40.314142 1614600 command_runner.go:130] > allowed_annotations = [
	I1209 04:35:40.314152 1614600 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1209 04:35:40.314155 1614600 command_runner.go:130] > ]
	I1209 04:35:40.314159 1614600 command_runner.go:130] > privileged_without_host_devices = false
	I1209 04:35:40.314164 1614600 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1209 04:35:40.314172 1614600 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1209 04:35:40.314177 1614600 command_runner.go:130] > runtime_type = ""
	I1209 04:35:40.314181 1614600 command_runner.go:130] > runtime_root = "/run/runc"
	I1209 04:35:40.314191 1614600 command_runner.go:130] > inherit_default_runtime = false
	I1209 04:35:40.314195 1614600 command_runner.go:130] > runtime_config_path = ""
	I1209 04:35:40.314203 1614600 command_runner.go:130] > container_min_memory = ""
	I1209 04:35:40.314208 1614600 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1209 04:35:40.314211 1614600 command_runner.go:130] > monitor_cgroup = "pod"
	I1209 04:35:40.314215 1614600 command_runner.go:130] > monitor_exec_cgroup = ""
	I1209 04:35:40.314219 1614600 command_runner.go:130] > privileged_without_host_devices = false
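	For comparison with the crun and runc handlers above, a sketch of a third handler following the documented template; the kata name, paths, and "vm" runtime_type are assumptions for illustration, not part of this run's config:

	    [crio.runtime.runtimes.kata]
	    # Hypothetical VM-based handler; runtime_config_path is only valid with runtime_type = "vm".
	    runtime_path = "/usr/bin/kata-runtime"
	    runtime_type = "vm"
	    runtime_config_path = "/etc/kata-containers/configuration.toml"
	    privileged_without_host_devices = true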
	I1209 04:35:40.314440 1614600 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1209 04:35:40.314455 1614600 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1209 04:35:40.314461 1614600 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1209 04:35:40.314470 1614600 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1209 04:35:40.314481 1614600 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1209 04:35:40.314491 1614600 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1209 04:35:40.314503 1614600 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1209 04:35:40.314509 1614600 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1209 04:35:40.314523 1614600 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1209 04:35:40.314532 1614600 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1209 04:35:40.314548 1614600 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1209 04:35:40.314556 1614600 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1209 04:35:40.314560 1614600 command_runner.go:130] > # Example:
	I1209 04:35:40.314565 1614600 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1209 04:35:40.314584 1614600 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1209 04:35:40.314596 1614600 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1209 04:35:40.314602 1614600 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1209 04:35:40.314611 1614600 command_runner.go:130] > # cpuset = "0-1"
	I1209 04:35:40.314615 1614600 command_runner.go:130] > # cpushares = "5"
	I1209 04:35:40.314619 1614600 command_runner.go:130] > # cpuquota = "1000"
	I1209 04:35:40.314623 1614600 command_runner.go:130] > # cpuperiod = "100000"
	I1209 04:35:40.314627 1614600 command_runner.go:130] > # cpulimit = "35"
	I1209 04:35:40.314630 1614600 command_runner.go:130] > # Where:
	I1209 04:35:40.314634 1614600 command_runner.go:130] > # The workload name is workload-type.
	I1209 04:35:40.314642 1614600 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1209 04:35:40.314651 1614600 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1209 04:35:40.314657 1614600 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1209 04:35:40.314665 1614600 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1209 04:35:40.314675 1614600 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
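
As a sketch of how a pod would opt into the example workload above: the activation annotation is key-only, and the comments give the per-container key in the $annotation_prefix.$resource/$ctrName form, which is what this sketch follows (the pod and container names below are hypothetical):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: workload-demo                            # hypothetical pod name
      annotations:
        io.crio/workload: ""                         # activation annotation (key only, value ignored)
        io.crio.workload-type.cpushares/app: "512"   # per-container cpushares override for container "app"
    spec:
      containers:
      - name: app
        image: registry.k8s.io/pause:3.10.1
    EOF

CRI-O reads these annotations when the sandbox is created, so they need to be in the pod spec up front rather than added later with kubectl annotate.
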
	I1209 04:35:40.314680 1614600 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1209 04:35:40.314688 1614600 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1209 04:35:40.314695 1614600 command_runner.go:130] > # Default value is set to true
	I1209 04:35:40.314700 1614600 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1209 04:35:40.314706 1614600 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1209 04:35:40.314710 1614600 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1209 04:35:40.314715 1614600 command_runner.go:130] > # Default value is set to 'false'
	I1209 04:35:40.314719 1614600 command_runner.go:130] > # disable_hostport_mapping = false
	I1209 04:35:40.314731 1614600 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1209 04:35:40.314740 1614600 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1209 04:35:40.314747 1614600 command_runner.go:130] > # timezone = ""
	I1209 04:35:40.314754 1614600 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1209 04:35:40.314757 1614600 command_runner.go:130] > #
	I1209 04:35:40.314763 1614600 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1209 04:35:40.314777 1614600 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1209 04:35:40.314781 1614600 command_runner.go:130] > [crio.image]
	I1209 04:35:40.314787 1614600 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1209 04:35:40.314791 1614600 command_runner.go:130] > # default_transport = "docker://"
	I1209 04:35:40.314797 1614600 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1209 04:35:40.314810 1614600 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1209 04:35:40.314814 1614600 command_runner.go:130] > # global_auth_file = ""
	I1209 04:35:40.314819 1614600 command_runner.go:130] > # The image used to instantiate infra containers.
	I1209 04:35:40.314829 1614600 command_runner.go:130] > # This option supports live configuration reload.
	I1209 04:35:40.314834 1614600 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1209 04:35:40.314841 1614600 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1209 04:35:40.314852 1614600 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1209 04:35:40.314858 1614600 command_runner.go:130] > # This option supports live configuration reload.
	I1209 04:35:40.314863 1614600 command_runner.go:130] > # pause_image_auth_file = ""
	I1209 04:35:40.314868 1614600 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1209 04:35:40.314875 1614600 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1209 04:35:40.314888 1614600 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1209 04:35:40.314904 1614600 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1209 04:35:40.314909 1614600 command_runner.go:130] > # pause_command = "/pause"
	I1209 04:35:40.314915 1614600 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1209 04:35:40.314924 1614600 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1209 04:35:40.314931 1614600 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1209 04:35:40.314942 1614600 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1209 04:35:40.314949 1614600 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1209 04:35:40.314955 1614600 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1209 04:35:40.314959 1614600 command_runner.go:130] > # pinned_images = [
	I1209 04:35:40.314961 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.314968 1614600 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1209 04:35:40.314978 1614600 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1209 04:35:40.314984 1614600 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1209 04:35:40.314995 1614600 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1209 04:35:40.315001 1614600 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1209 04:35:40.315011 1614600 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1209 04:35:40.315023 1614600 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1209 04:35:40.315031 1614600 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1209 04:35:40.315037 1614600 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1209 04:35:40.315049 1614600 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1209 04:35:40.315055 1614600 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1209 04:35:40.315065 1614600 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
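
The log does not show the contents of /etc/crio/policy.json itself; for reference, the minimal permissive policy documented in containers-policy.json(5) is:

    $ sudo cat /etc/crio/policy.json
    {
        "default": [
            { "type": "insecureAcceptAnything" }
        ]
    }

Anything stricter (for example, signedBy requirements) goes in the same file, or per-namespace under the signature_policy_dir described above.
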
	I1209 04:35:40.315071 1614600 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1209 04:35:40.315078 1614600 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1209 04:35:40.315086 1614600 command_runner.go:130] > # changing them here.
	I1209 04:35:40.315091 1614600 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1209 04:35:40.315095 1614600 command_runner.go:130] > # insecure_registries = [
	I1209 04:35:40.315099 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.315108 1614600 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1209 04:35:40.315114 1614600 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I1209 04:35:40.315319 1614600 command_runner.go:130] > # image_volumes = "mkdir"
	I1209 04:35:40.315344 1614600 command_runner.go:130] > # Temporary directory to use for storing big files
	I1209 04:35:40.315350 1614600 command_runner.go:130] > # big_files_temporary_dir = ""
	I1209 04:35:40.315355 1614600 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1209 04:35:40.315362 1614600 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1209 04:35:40.315367 1614600 command_runner.go:130] > # auto_reload_registries = false
	I1209 04:35:40.315372 1614600 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1209 04:35:40.315381 1614600 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1209 04:35:40.315390 1614600 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1209 04:35:40.315399 1614600 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1209 04:35:40.315404 1614600 command_runner.go:130] > # The mode of short name resolution.
	I1209 04:35:40.315411 1614600 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1209 04:35:40.315422 1614600 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1209 04:35:40.315430 1614600 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1209 04:35:40.315434 1614600 command_runner.go:130] > # short_name_mode = "enforcing"
	I1209 04:35:40.315440 1614600 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1209 04:35:40.315446 1614600 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1209 04:35:40.315450 1614600 command_runner.go:130] > # oci_artifact_mount_support = true
	I1209 04:35:40.315456 1614600 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1209 04:35:40.315460 1614600 command_runner.go:130] > # CNI plugins.
	I1209 04:35:40.315463 1614600 command_runner.go:130] > [crio.network]
	I1209 04:35:40.315469 1614600 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1209 04:35:40.315475 1614600 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1209 04:35:40.315482 1614600 command_runner.go:130] > # cni_default_network = ""
	I1209 04:35:40.315488 1614600 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1209 04:35:40.315493 1614600 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1209 04:35:40.315503 1614600 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1209 04:35:40.315507 1614600 command_runner.go:130] > # plugin_dirs = [
	I1209 04:35:40.315515 1614600 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1209 04:35:40.315519 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.315526 1614600 command_runner.go:130] > # List of included pod metrics.
	I1209 04:35:40.315530 1614600 command_runner.go:130] > # included_pod_metrics = [
	I1209 04:35:40.315533 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.315539 1614600 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1209 04:35:40.315542 1614600 command_runner.go:130] > [crio.metrics]
	I1209 04:35:40.315547 1614600 command_runner.go:130] > # Globally enable or disable metrics support.
	I1209 04:35:40.315552 1614600 command_runner.go:130] > # enable_metrics = false
	I1209 04:35:40.315562 1614600 command_runner.go:130] > # Specify enabled metrics collectors.
	I1209 04:35:40.315567 1614600 command_runner.go:130] > # Per default all metrics are enabled.
	I1209 04:35:40.315573 1614600 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1209 04:35:40.315587 1614600 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1209 04:35:40.315593 1614600 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1209 04:35:40.315601 1614600 command_runner.go:130] > # metrics_collectors = [
	I1209 04:35:40.315605 1614600 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1209 04:35:40.315610 1614600 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1209 04:35:40.315614 1614600 command_runner.go:130] > # 	"containers_oom_total",
	I1209 04:35:40.315617 1614600 command_runner.go:130] > # 	"processes_defunct",
	I1209 04:35:40.315621 1614600 command_runner.go:130] > # 	"operations_total",
	I1209 04:35:40.315626 1614600 command_runner.go:130] > # 	"operations_latency_seconds",
	I1209 04:35:40.315630 1614600 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1209 04:35:40.315635 1614600 command_runner.go:130] > # 	"operations_errors_total",
	I1209 04:35:40.315638 1614600 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1209 04:35:40.315642 1614600 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1209 04:35:40.315646 1614600 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1209 04:35:40.315651 1614600 command_runner.go:130] > # 	"image_pulls_success_total",
	I1209 04:35:40.315661 1614600 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1209 04:35:40.315666 1614600 command_runner.go:130] > # 	"containers_oom_count_total",
	I1209 04:35:40.315675 1614600 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1209 04:35:40.315849 1614600 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1209 04:35:40.315864 1614600 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1209 04:35:40.315868 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.315880 1614600 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1209 04:35:40.315884 1614600 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1209 04:35:40.315889 1614600 command_runner.go:130] > # The port on which the metrics server will listen.
	I1209 04:35:40.315893 1614600 command_runner.go:130] > # metrics_port = 9090
	I1209 04:35:40.315899 1614600 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1209 04:35:40.315907 1614600 command_runner.go:130] > # metrics_socket = ""
	I1209 04:35:40.315912 1614600 command_runner.go:130] > # The certificate for the secure metrics server.
	I1209 04:35:40.315921 1614600 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1209 04:35:40.315929 1614600 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1209 04:35:40.315937 1614600 command_runner.go:130] > # certificate on any modification event.
	I1209 04:35:40.315944 1614600 command_runner.go:130] > # metrics_cert = ""
	I1209 04:35:40.315953 1614600 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1209 04:35:40.315959 1614600 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1209 04:35:40.315968 1614600 command_runner.go:130] > # metrics_key = ""
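
Metrics are disabled by default here (enable_metrics = false); if they were switched on, the Prometheus endpoint would be served on the host and port configured above, and a quick manual check from the node might look like:

    $ curl -s http://127.0.0.1:9090/metrics | grep -m1 crio_operations

This assumes the default metrics_host and metrics_port shown in the comments.
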
	I1209 04:35:40.315974 1614600 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1209 04:35:40.315982 1614600 command_runner.go:130] > [crio.tracing]
	I1209 04:35:40.315987 1614600 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1209 04:35:40.315996 1614600 command_runner.go:130] > # enable_tracing = false
	I1209 04:35:40.316002 1614600 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1209 04:35:40.316009 1614600 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1209 04:35:40.316017 1614600 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1209 04:35:40.316027 1614600 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1209 04:35:40.316032 1614600 command_runner.go:130] > # CRI-O NRI configuration.
	I1209 04:35:40.316035 1614600 command_runner.go:130] > [crio.nri]
	I1209 04:35:40.316040 1614600 command_runner.go:130] > # Globally enable or disable NRI.
	I1209 04:35:40.316043 1614600 command_runner.go:130] > # enable_nri = true
	I1209 04:35:40.316047 1614600 command_runner.go:130] > # NRI socket to listen on.
	I1209 04:35:40.316051 1614600 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1209 04:35:40.316055 1614600 command_runner.go:130] > # NRI plugin directory to use.
	I1209 04:35:40.316064 1614600 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1209 04:35:40.316069 1614600 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1209 04:35:40.316077 1614600 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1209 04:35:40.316083 1614600 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1209 04:35:40.316147 1614600 command_runner.go:130] > # nri_disable_connections = false
	I1209 04:35:40.316157 1614600 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1209 04:35:40.316162 1614600 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1209 04:35:40.316185 1614600 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1209 04:35:40.316193 1614600 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1209 04:35:40.316198 1614600 command_runner.go:130] > # NRI default validator configuration.
	I1209 04:35:40.316205 1614600 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1209 04:35:40.316215 1614600 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1209 04:35:40.316220 1614600 command_runner.go:130] > # can be restricted/rejected:
	I1209 04:35:40.316224 1614600 command_runner.go:130] > # - OCI hook injection
	I1209 04:35:40.316233 1614600 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1209 04:35:40.316238 1614600 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1209 04:35:40.316243 1614600 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1209 04:35:40.316247 1614600 command_runner.go:130] > # - adjustment of linux namespaces
	I1209 04:35:40.316254 1614600 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1209 04:35:40.316264 1614600 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1209 04:35:40.316271 1614600 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1209 04:35:40.316277 1614600 command_runner.go:130] > #
	I1209 04:35:40.316282 1614600 command_runner.go:130] > # [crio.nri.default_validator]
	I1209 04:35:40.316290 1614600 command_runner.go:130] > # nri_enable_default_validator = false
	I1209 04:35:40.316295 1614600 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1209 04:35:40.316307 1614600 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1209 04:35:40.316317 1614600 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1209 04:35:40.316322 1614600 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1209 04:35:40.316327 1614600 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1209 04:35:40.316480 1614600 command_runner.go:130] > # nri_validator_required_plugins = [
	I1209 04:35:40.316508 1614600 command_runner.go:130] > # ]
	I1209 04:35:40.316521 1614600 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
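
All of these commented values are defaults; overrides live in drop-in files under /etc/crio/crio.conf.d, which CRI-O loads in order (see the "Updating config from drop-in file" lines below). A hypothetical drop-in enabling NRI's default validator would be:

    $ sudo tee /etc/crio/crio.conf.d/20-nri.conf >/dev/null <<'EOF'
    [crio.nri]
    enable_nri = true

    [crio.nri.default_validator]
    nri_enable_default_validator = true
    nri_validator_reject_oci_hook_adjustment = true
    EOF

The file name 20-nri.conf is made up for the example; only the section and key names come from the dump above.
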
	I1209 04:35:40.316528 1614600 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1209 04:35:40.316540 1614600 command_runner.go:130] > [crio.stats]
	I1209 04:35:40.316546 1614600 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1209 04:35:40.316551 1614600 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1209 04:35:40.316555 1614600 command_runner.go:130] > # stats_collection_period = 0
	I1209 04:35:40.316562 1614600 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1209 04:35:40.316572 1614600 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1209 04:35:40.316577 1614600 command_runner.go:130] > # collection_period = 0
	I1209 04:35:40.318311 1614600 command_runner.go:130] ! time="2025-12-09T04:35:40.282255082Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1209 04:35:40.318330 1614600 command_runner.go:130] ! time="2025-12-09T04:35:40.2822971Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1209 04:35:40.318340 1614600 command_runner.go:130] ! time="2025-12-09T04:35:40.282328904Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1209 04:35:40.318349 1614600 command_runner.go:130] ! time="2025-12-09T04:35:40.282355243Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1209 04:35:40.318358 1614600 command_runner.go:130] ! time="2025-12-09T04:35:40.282430665Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:35:40.318367 1614600 command_runner.go:130] ! time="2025-12-09T04:35:40.282713695Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1209 04:35:40.318382 1614600 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1209 04:35:40.318459 1614600 cni.go:84] Creating CNI manager for ""
	I1209 04:35:40.318484 1614600 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 04:35:40.318506 1614600 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 04:35:40.318532 1614600 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-331811 NodeName:functional-331811 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 04:35:40.318689 1614600 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-331811"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
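
minikube renders the config above to /var/tmp/minikube/kubeadm.yaml.new on the node (the 2221-byte scp just below) and later diffs it against the live /var/tmp/minikube/kubeadm.yaml to decide whether the restart needs a reconfiguration. The same comparison can be reproduced by hand:

    $ minikube -p functional-331811 ssh -- sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new

An empty diff is what lets the log conclude "The running cluster does not require reconfiguration".
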
	
	I1209 04:35:40.318765 1614600 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1209 04:35:40.328360 1614600 command_runner.go:130] > kubeadm
	I1209 04:35:40.328381 1614600 command_runner.go:130] > kubectl
	I1209 04:35:40.328387 1614600 command_runner.go:130] > kubelet
	I1209 04:35:40.329285 1614600 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 04:35:40.329353 1614600 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 04:35:40.336944 1614600 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1209 04:35:40.349970 1614600 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1209 04:35:40.362809 1614600 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1209 04:35:40.375503 1614600 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1209 04:35:40.379345 1614600 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1209 04:35:40.379778 1614600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:35:40.502305 1614600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 04:35:41.326409 1614600 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811 for IP: 192.168.49.2
	I1209 04:35:41.326563 1614600 certs.go:195] generating shared ca certs ...
	I1209 04:35:41.326611 1614600 certs.go:227] acquiring lock for ca certs: {Name:mkbe8bce08db7aa945866791683d426e1b560718 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:35:41.326833 1614600 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key
	I1209 04:35:41.326887 1614600 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key
	I1209 04:35:41.326895 1614600 certs.go:257] generating profile certs ...
	I1209 04:35:41.327067 1614600 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.key
	I1209 04:35:41.327129 1614600 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.key.29f4af34
	I1209 04:35:41.327233 1614600 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.key
	I1209 04:35:41.327250 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 04:35:41.327267 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 04:35:41.327279 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 04:35:41.327290 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 04:35:41.327349 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 04:35:41.327367 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 04:35:41.327413 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 04:35:41.327427 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 04:35:41.327509 1614600 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem (1338 bytes)
	W1209 04:35:41.327593 1614600 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521_empty.pem, impossibly tiny 0 bytes
	I1209 04:35:41.327604 1614600 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 04:35:41.327677 1614600 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem (1078 bytes)
	I1209 04:35:41.327750 1614600 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem (1123 bytes)
	I1209 04:35:41.327813 1614600 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem (1675 bytes)
	I1209 04:35:41.327913 1614600 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem (1708 bytes)
	I1209 04:35:41.327983 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem -> /usr/share/ca-certificates/1580521.pem
	I1209 04:35:41.328001 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem -> /usr/share/ca-certificates/15805212.pem
	I1209 04:35:41.328047 1614600 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:35:41.328720 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 04:35:41.349998 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 04:35:41.370613 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 04:35:41.391438 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 04:35:41.410483 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 04:35:41.429428 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 04:35:41.449234 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 04:35:41.468289 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 04:35:41.486148 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem --> /usr/share/ca-certificates/1580521.pem (1338 bytes)
	I1209 04:35:41.504497 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem --> /usr/share/ca-certificates/15805212.pem (1708 bytes)
	I1209 04:35:41.523111 1614600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 04:35:41.542281 1614600 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 04:35:41.555566 1614600 ssh_runner.go:195] Run: openssl version
	I1209 04:35:41.561986 1614600 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1209 04:35:41.562090 1614600 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1580521.pem
	I1209 04:35:41.569846 1614600 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1580521.pem /etc/ssl/certs/1580521.pem
	I1209 04:35:41.577817 1614600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1580521.pem
	I1209 04:35:41.581778 1614600 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  9 04:27 /usr/share/ca-certificates/1580521.pem
	I1209 04:35:41.581849 1614600 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:27 /usr/share/ca-certificates/1580521.pem
	I1209 04:35:41.581927 1614600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1580521.pem
	I1209 04:35:41.622889 1614600 command_runner.go:130] > 51391683
	I1209 04:35:41.623441 1614600 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 04:35:41.630995 1614600 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15805212.pem
	I1209 04:35:41.638454 1614600 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15805212.pem /etc/ssl/certs/15805212.pem
	I1209 04:35:41.646110 1614600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15805212.pem
	I1209 04:35:41.649703 1614600 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  9 04:27 /usr/share/ca-certificates/15805212.pem
	I1209 04:35:41.649815 1614600 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:27 /usr/share/ca-certificates/15805212.pem
	I1209 04:35:41.649886 1614600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15805212.pem
	I1209 04:35:41.690940 1614600 command_runner.go:130] > 3ec20f2e
	I1209 04:35:41.691023 1614600 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 04:35:41.698710 1614600 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:35:41.705943 1614600 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 04:35:41.713451 1614600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:35:41.717157 1614600 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  9 04:17 /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:35:41.717250 1614600 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:17 /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:35:41.717310 1614600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:35:41.757537 1614600 command_runner.go:130] > b5213941
	I1209 04:35:41.757976 1614600 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
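
The hash-and-symlink sequence above is how OpenSSL locates trusted CAs: x509 -hash prints the subject-name hash, ln -fs creates the <hash>.0 symlink in /etc/ssl/certs, and the final test -L confirms the link exists. For the minikubeCA case this can be replayed as:

    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    b5213941
    $ ls -l /etc/ssl/certs/b5213941.0   # symlink pointing back at the PEM above
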
	I1209 04:35:41.765482 1614600 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 04:35:41.769213 1614600 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 04:35:41.769237 1614600 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1209 04:35:41.769244 1614600 command_runner.go:130] > Device: 259,1	Inode: 1322432     Links: 1
	I1209 04:35:41.769251 1614600 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1209 04:35:41.769256 1614600 command_runner.go:130] > Access: 2025-12-09 04:31:33.728838377 +0000
	I1209 04:35:41.769262 1614600 command_runner.go:130] > Modify: 2025-12-09 04:27:28.466831926 +0000
	I1209 04:35:41.769267 1614600 command_runner.go:130] > Change: 2025-12-09 04:27:28.466831926 +0000
	I1209 04:35:41.769272 1614600 command_runner.go:130] >  Birth: 2025-12-09 04:27:28.466831926 +0000
	I1209 04:35:41.769363 1614600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 04:35:41.810027 1614600 command_runner.go:130] > Certificate will not expire
	I1209 04:35:41.810619 1614600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 04:35:41.851168 1614600 command_runner.go:130] > Certificate will not expire
	I1209 04:35:41.851713 1614600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 04:35:41.892758 1614600 command_runner.go:130] > Certificate will not expire
	I1209 04:35:41.892839 1614600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 04:35:41.938176 1614600 command_runner.go:130] > Certificate will not expire
	I1209 04:35:41.938689 1614600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 04:35:41.979665 1614600 command_runner.go:130] > Certificate will not expire
	I1209 04:35:41.980184 1614600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1209 04:35:42.021167 1614600 command_runner.go:130] > Certificate will not expire
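
Each check uses openssl x509 -checkend 86400, which prints "Certificate will not expire" and exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now; a non-zero exit here would make minikube regenerate the certificate. For example:

    $ openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
    Certificate will not expire
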
	I1209 04:35:42.021686 1614600 kubeadm.go:401] StartCluster: {Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:35:42.021825 1614600 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 04:35:42.021936 1614600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:35:42.052115 1614600 cri.go:89] found id: ""
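
The empty found id: "" means the filter matched no containers. The same listing can be inspected by hand with full columns by dropping --quiet:

    $ sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
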
	I1209 04:35:42.052191 1614600 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 04:35:42.060116 1614600 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1209 04:35:42.060196 1614600 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1209 04:35:42.060220 1614600 command_runner.go:130] > /var/lib/minikube/etcd:
	I1209 04:35:42.061227 1614600 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1209 04:35:42.061247 1614600 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1209 04:35:42.061342 1614600 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 04:35:42.070417 1614600 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:35:42.071064 1614600 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-331811" does not appear in /home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:35:42.071256 1614600 kubeconfig.go:62] /home/jenkins/minikube-integration/22081-1577059/kubeconfig needs updating (will repair): [kubeconfig missing "functional-331811" cluster setting kubeconfig missing "functional-331811" context setting]
	I1209 04:35:42.071646 1614600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/kubeconfig: {Name:mk56da51bd85daae017f7ca18ae73d8a385a4c6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:35:42.072159 1614600 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:35:42.072417 1614600 kapi.go:59] client config for functional-331811: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt", KeyFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.key", CAFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 04:35:42.073140 1614600 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1209 04:35:42.073224 1614600 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1209 04:35:42.073266 1614600 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1209 04:35:42.073391 1614600 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1209 04:35:42.073418 1614600 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1209 04:35:42.073437 1614600 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1209 04:35:42.073813 1614600 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 04:35:42.085766 1614600 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1209 04:35:42.085868 1614600 kubeadm.go:602] duration metric: took 24.612846ms to restartPrimaryControlPlane
	I1209 04:35:42.085898 1614600 kubeadm.go:403] duration metric: took 64.220222ms to StartCluster
	I1209 04:35:42.085947 1614600 settings.go:142] acquiring lock: {Name:mk2ff9b0d23dc8757d89015af482b8c477568e49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:35:42.086095 1614600 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:35:42.086834 1614600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/kubeconfig: {Name:mk56da51bd85daae017f7ca18ae73d8a385a4c6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:35:42.087380 1614600 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 04:35:42.087524 1614600 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 04:35:42.087628 1614600 addons.go:70] Setting storage-provisioner=true in profile "functional-331811"
	I1209 04:35:42.087691 1614600 addons.go:239] Setting addon storage-provisioner=true in "functional-331811"
	I1209 04:35:42.087740 1614600 host.go:66] Checking if "functional-331811" exists ...
	I1209 04:35:42.088325 1614600 cli_runner.go:164] Run: docker container inspect functional-331811 --format={{.State.Status}}
	I1209 04:35:42.087482 1614600 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 04:35:42.089019 1614600 addons.go:70] Setting default-storageclass=true in profile "functional-331811"
	I1209 04:35:42.089039 1614600 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-331811"
	I1209 04:35:42.089353 1614600 cli_runner.go:164] Run: docker container inspect functional-331811 --format={{.State.Status}}
	I1209 04:35:42.092155 1614600 out.go:179] * Verifying Kubernetes components...
	I1209 04:35:42.095248 1614600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:35:42.128430 1614600 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 04:35:42.131623 1614600 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:42.131651 1614600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 04:35:42.131731 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:42.147694 1614600 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:35:42.147902 1614600 kapi.go:59] client config for functional-331811: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt", KeyFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.key", CAFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 04:35:42.148207 1614600 addons.go:239] Setting addon default-storageclass=true in "functional-331811"
	I1209 04:35:42.148248 1614600 host.go:66] Checking if "functional-331811" exists ...
	I1209 04:35:42.148712 1614600 cli_runner.go:164] Run: docker container inspect functional-331811 --format={{.State.Status}}
	I1209 04:35:42.182846 1614600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:35:42.193184 1614600 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:42.193209 1614600 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 04:35:42.193289 1614600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:35:42.220341 1614600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:35:42.327312 1614600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 04:35:42.346850 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:42.376931 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:43.076226 1614600 node_ready.go:35] waiting up to 6m0s for node "functional-331811" to be "Ready" ...
	I1209 04:35:43.076344 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:43.076396 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:43.076607 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:43.076635 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.076655 1614600 retry.go:31] will retry after 310.700454ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.076685 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:43.076702 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.076708 1614600 retry.go:31] will retry after 282.763546ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.076773 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:43.360393 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:43.387801 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:43.432930 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:43.433022 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.433059 1614600 retry.go:31] will retry after 489.220325ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.460835 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:43.460941 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.460967 1614600 retry.go:31] will retry after 355.931225ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.577252 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:43.577329 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:43.577711 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:43.817107 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:43.911473 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:43.915604 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.915640 1614600 retry.go:31] will retry after 537.488813ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.922787 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:43.976592 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:43.980371 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:43.980407 1614600 retry.go:31] will retry after 753.380628ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:44.076554 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:44.076652 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:44.077073 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:44.453574 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:44.512034 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:44.512090 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:44.512116 1614600 retry.go:31] will retry after 707.625417ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:44.577247 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:44.577348 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:44.577656 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:44.734008 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:44.795873 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:44.795936 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:44.795960 1614600 retry.go:31] will retry after 1.127913267s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:45.077396 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:45.077480 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:45.077910 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:35:45.077993 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:35:45.220540 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:45.296909 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:45.296951 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:45.296996 1614600 retry.go:31] will retry after 917.152391ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:45.577366 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:45.577441 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:45.577737 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:45.924157 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:45.995176 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:45.995217 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:45.995239 1614600 retry.go:31] will retry after 1.420775217s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:46.077446 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:46.077526 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:46.077798 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:46.215234 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:46.279745 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:46.279823 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:46.279850 1614600 retry.go:31] will retry after 1.336322791s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:46.577242 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:46.577341 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:46.577688 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:47.077361 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:47.077438 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:47.077723 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:47.416255 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:47.477013 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:47.480365 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:47.480397 1614600 retry.go:31] will retry after 2.174557655s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:47.576489 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:47.576616 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:47.576955 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:35:47.577044 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:35:47.617100 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:47.681529 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:47.681577 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:47.681598 1614600 retry.go:31] will retry after 3.276200411s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:48.077115 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:48.077203 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:48.077555 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:48.577382 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:48.577481 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:48.577821 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:49.076458 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:49.076528 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:49.076798 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:49.576545 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:49.576626 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:49.576988 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:49.655381 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:49.715000 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:49.715035 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:49.715054 1614600 retry.go:31] will retry after 3.337758974s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:50.077421 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:50.077518 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:50.077847 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:35:50.077903 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:35:50.576531 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:50.576630 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:50.576967 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:50.958720 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:51.022646 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:51.022681 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:51.022700 1614600 retry.go:31] will retry after 4.624703928s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:51.077048 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:51.077142 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:51.077474 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:51.577259 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:51.577334 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:51.577661 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:52.076578 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:52.076656 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:52.076943 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:52.576488 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:52.576565 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:52.576896 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:35:52.576958 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:35:53.053753 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:53.077246 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:53.077324 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:53.077594 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:53.113242 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:53.113284 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:53.113306 1614600 retry.go:31] will retry after 2.734988542s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:53.576425 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:53.576526 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:53.576833 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:54.076533 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:54.076634 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:54.076949 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:54.576551 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:54.576653 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:54.577004 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:35:54.577071 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:35:55.076426 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:55.076500 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:55.076811 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:55.576518 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:55.576596 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:55.576936 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:55.648391 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:35:55.705094 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:55.708789 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:55.708820 1614600 retry.go:31] will retry after 6.736330921s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:55.849034 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:35:55.918734 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:35:55.918780 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:55.918800 1614600 retry.go:31] will retry after 8.152075725s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:35:56.077153 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:56.077246 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:56.077636 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:56.577352 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:56.577427 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:56.577693 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:35:56.577743 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:35:57.077398 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:57.077499 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:57.077829 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:57.576552 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:57.576635 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:57.576959 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:58.076583 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:58.076666 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:58.076931 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:58.576498 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:58.576587 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:58.576893 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:35:59.076592 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:59.076667 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:59.077034 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:35:59.077089 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:35:59.576459 1614600 type.go:168] "Request Body" body=""
	I1209 04:35:59.576533 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:35:59.576805 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:00.076586 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:00.076681 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:00.077014 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:00.576522 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:00.576616 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:00.577002 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:01.076587 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:01.076666 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:01.076947 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:01.576525 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:01.576599 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:01.576933 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:01.576991 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:02.077159 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:02.077237 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:02.077605 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:02.446164 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:36:02.502744 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:36:02.506462 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:02.506498 1614600 retry.go:31] will retry after 8.388840508s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:02.576683 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:02.576758 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:02.577095 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:03.076524 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:03.076604 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:03.076977 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:03.576704 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:03.576784 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:03.577119 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:03.577179 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:04.071900 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:36:04.076533 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:04.076606 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:04.076869 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:04.150537 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:36:04.154620 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:04.154650 1614600 retry.go:31] will retry after 8.078270125s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:04.577310 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:04.577452 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:04.577816 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:05.076556 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:05.076634 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:05.077025 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:05.576594 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:05.576672 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:05.576950 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:06.076647 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:06.076738 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:06.077077 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:06.077129 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:06.576522 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:06.576621 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:06.576938 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:07.076823 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:07.076900 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:07.077209 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:07.577024 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:07.577097 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:07.577441 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:08.077262 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:08.077341 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:08.077670 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:08.077723 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:08.577265 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:08.577344 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:08.577616 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:09.077403 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:09.077482 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:09.077835 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:09.576413 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:09.576503 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:09.576813 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:10.076504 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:10.076593 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:10.076887 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:10.576575 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:10.576673 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:10.576991 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:10.577053 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:10.895548 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:36:10.953462 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:36:10.957148 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:10.957180 1614600 retry.go:31] will retry after 18.757746695s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
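
When an addon apply fails like this, retry.go schedules another attempt after a randomized delay (18.757746695s here, 20.023788924s below). A minimal sketch of retrying an external kubectl apply with jittered, capped-exponential backoff (the helper, constants, and jitter policy are illustrative assumptions, not retry.go's exact algorithm):

    package main

    import (
    	"log"
    	"math/rand"
    	"os/exec"
    	"time"
    )

    // applyWithRetry re-runs `kubectl apply` until it succeeds, sleeping a
    // jittered delay between attempts and doubling the base each round,
    // similar in spirit to the retry.go entries in this log.
    func applyWithRetry(manifest string, attempts int) error {
    	base := 10 * time.Second
    	var err error
    	for i := 0; i < attempts; i++ {
    		cmd := exec.Command("kubectl", "apply", "--force", "-f", manifest)
    		out, runErr := cmd.CombinedOutput()
    		if runErr == nil {
    			return nil
    		}
    		err = runErr
    		log.Printf("apply failed: %v\n%s", runErr, out)
    		// Randomize the delay so parallel appliers don't retry in
    		// lockstep; this is how delays like 18.75s and 20.02s arise.
    		delay := base/2 + time.Duration(rand.Int63n(int64(base)))
    		log.Printf("will retry after %s", delay)
    		time.Sleep(delay)
    		base *= 2
    	}
    	return err
    }

    func main() {
    	// Manifest path taken from the log; adjust for a local run.
    	if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 5); err != nil {
    		log.Fatal(err)
    	}
    }

Note that the kubectl stderr itself points at the root cause: client-side validation needs the apiserver's OpenAPI document, so the apply cannot succeed until port 8441 comes back, with or without --validate=false.
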
	I1209 04:36:11.076395 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:11.076478 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:11.076772 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:11.576443 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:11.576513 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:11.576815 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:12.076936 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:12.077013 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:12.077309 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:12.233682 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:36:12.292817 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:36:12.296392 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:12.296423 1614600 retry.go:31] will retry after 20.023788924s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:12.576943 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:12.577019 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:12.577364 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:12.577421 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:13.077108 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:13.077239 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:13.077603 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:13.577256 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:13.577343 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:13.577689 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:14.077313 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:14.077412 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:14.077731 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:14.576427 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:14.576496 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:14.576774 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:15.076490 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:15.076583 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:15.076938 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:15.076994 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:15.576474 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:15.576555 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:15.576853 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:16.076431 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:16.076506 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:16.076783 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:16.576527 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:16.576609 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:16.576956 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:17.076988 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:17.077082 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:17.077457 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:17.077514 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:17.577068 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:17.577144 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:17.577409 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:18.077285 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:18.077383 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:18.077755 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:18.576466 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:18.576544 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:18.576909 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:19.076597 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:19.076666 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:19.076929 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:19.576602 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:19.576675 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:19.577011 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:19.577070 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:20.076579 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:20.076658 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:20.076980 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:20.576450 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:20.576531 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:20.576849 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:21.076506 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:21.076594 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:21.076946 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:21.576536 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:21.576638 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:21.576994 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:22.077314 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:22.077388 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:22.077670 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:22.077714 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:22.576513 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:22.576607 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:22.576958 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:23.076502 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:23.076595 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:23.076934 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:23.576637 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:23.576705 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:23.577060 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:24.076759 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:24.076839 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:24.077254 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:24.576837 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:24.576916 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:24.577306 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:24.577364 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:25.077118 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:25.077190 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:25.077463 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:25.577272 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:25.577348 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:25.577737 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:26.077403 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:26.077487 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:26.077842 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:26.576440 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:26.576511 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:26.576779 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:27.076863 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:27.076944 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:27.077310 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:27.077367 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:27.577163 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:27.577241 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:27.577580 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:28.077311 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:28.077379 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:28.077629 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:28.577399 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:28.577473 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:28.577808 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:29.076424 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:29.076514 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:29.076878 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:29.576577 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:29.576646 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:29.576910 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:29.576955 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:29.715418 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:36:29.773517 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:36:29.777518 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:29.777549 1614600 retry.go:31] will retry after 13.466249075s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:30.077059 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:30.077150 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:30.077512 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:30.577014 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:30.577100 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:30.577433 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:31.077181 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:31.077268 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:31.077521 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:31.577348 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:31.577443 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:31.577801 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:31.577857 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:32.076722 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:32.076806 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:32.077154 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:32.320502 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:36:32.377593 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:36:32.381870 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:32.381909 1614600 retry.go:31] will retry after 28.435049856s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
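
"connection refused" on both 192.168.49.2:8441 and [::1]:8441 means nothing is listening on the apiserver port yet. One way to avoid burning retry attempts is to first wait for the socket to accept connections; a minimal probe sketch (a hypothetical helper, not something this test run actually does):

    package main

    import (
    	"log"
    	"net"
    	"time"
    )

    // waitForPort dials the address until a TCP connection succeeds.
    // "connection refused" in the log means exactly this probe failing:
    // no process is bound to 8441 while kube-apiserver restarts.
    func waitForPort(addr string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    		if err == nil {
    			conn.Close()
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return err
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

    func main() {
    	if err := waitForPort("192.168.49.2:8441", 3*time.Minute); err != nil {
    		log.Fatal(err)
    	}
    	log.Println("apiserver port is accepting connections")
    }
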
	I1209 04:36:32.577214 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:32.577283 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:32.577547 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:33.077429 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:33.077516 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:33.077823 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:33.576506 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:33.576632 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:33.576978 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:34.076485 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:34.076586 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:34.076922 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:34.076973 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:34.576560 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:34.576639 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:34.576951 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:35.076511 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:35.076628 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:35.076979 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:35.576473 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:35.576575 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:35.576844 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:36.076491 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:36.076571 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:36.076926 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:36.576535 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:36.576620 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:36.576977 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:36.577035 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:37.076803 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:37.076875 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:37.077215 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:37.577050 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:37.577125 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:37.577459 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:38.077398 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:38.077495 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:38.077876 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:38.576584 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:38.576668 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:38.576989 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:39.076692 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:39.076768 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:39.077121 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:39.077180 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:39.576496 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:39.576575 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:39.576911 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:40.076578 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:40.076653 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:40.077016 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:40.576532 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:40.576612 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:40.576898 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:41.076617 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:41.076698 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:41.077052 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:41.576584 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:41.576671 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:41.576937 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:41.576987 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:42.076459 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:42.076556 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:42.076942 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:42.576531 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:42.576610 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:42.576958 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:43.076568 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:43.076663 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:43.077002 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:43.244488 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:36:43.308556 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:36:43.308599 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:43.308622 1614600 retry.go:31] will retry after 20.568808948s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:36:43.577020 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:43.577099 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:43.577399 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:43.577456 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:44.077183 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:44.077280 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:44.077609 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:44.577311 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:44.577390 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:44.577747 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:45.076609 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:45.076692 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:45.077821 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1209 04:36:45.576471 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:45.576555 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:45.576880 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:46.076459 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:46.076531 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:46.076837 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:46.076889 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:46.576488 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:46.576565 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:46.576859 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:47.076876 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:47.076949 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:47.077253 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:47.577001 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:47.577079 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:47.577339 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:48.077087 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:48.077173 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:48.077495 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:48.077544 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:48.577135 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:48.577218 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:48.577531 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:49.077177 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:49.077246 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:49.077507 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:49.577363 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:49.577442 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:49.577806 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:50.076499 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:50.076584 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:50.076933 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:50.576621 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:50.576693 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:50.577013 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:50.577067 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:51.076722 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:51.076799 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:51.077123 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:51.576506 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:51.576581 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:51.576933 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:52.076970 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:52.077045 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:52.077314 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:52.577191 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:52.577272 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:52.577623 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:52.577685 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:53.076390 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:53.076468 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:53.076830 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:53.577353 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:53.577471 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:53.577714 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:54.076421 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:54.076508 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:54.076889 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:54.576481 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:54.576586 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:54.576925 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:55.076607 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:55.076685 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:55.077020 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:55.077081 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:55.576488 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:55.576567 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:55.576912 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:56.076526 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:56.076606 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:56.076949 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:56.577383 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:56.577451 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:56.577701 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:57.076714 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:57.076787 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:57.077117 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:57.077170 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:36:57.576491 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:57.576573 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:57.576896 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:58.076441 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:58.076535 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:58.076850 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:58.576483 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:58.576569 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:58.576887 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:59.076498 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:59.076574 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:59.076928 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:36:59.576518 1614600 type.go:168] "Request Body" body=""
	I1209 04:36:59.576600 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:36:59.576972 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:36:59.577037 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:00.076760 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:00.076863 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:00.077187 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:00.576907 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:00.576998 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:00.577391 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:00.817971 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:37:00.880147 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:37:00.880206 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1209 04:37:00.880224 1614600 retry.go:31] will retry after 16.46927575s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
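
The record above shows minikube's apply-and-retry pattern: the addon manifest is applied with kubectl, the apply fails because nothing is listening on the apiserver port, and the run is rescheduled after an irregular interval ("will retry after 16.46927575s"). As a minimal sketch of that pattern, the Go snippet below runs an apply and backs off exponentially with jitter between attempts. The attempt count and base delay are illustrative assumptions, not minikube's actual backoff parameters.

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry runs `kubectl apply --force -f manifest`, retrying on
// failure with exponential backoff plus jitter (the jitter explains the
// irregular "will retry after ..." intervals seen in the log).
func applyWithRetry(manifest string, attempts int) error {
	delay := 2 * time.Second // assumed base delay, for illustration only
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		fmt.Printf("apply failed (attempt %d/%d): %v\n%s", i+1, attempts, err, out)
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		time.Sleep(sleep)
		delay *= 2
	}
	return fmt.Errorf("giving up on %s after %d attempts", manifest, attempts)
}

func main() {
	// Manifest path taken from the log record above.
	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
		fmt.Println(err)
	}
}
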
	I1209 04:37:01.076478 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:01.076543 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:01.076797 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:01.576513 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:01.576588 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:01.576960 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:02.076888 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:02.076961 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:02.077278 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:02.077329 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:02.576827 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:02.576905 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:02.577242 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:03.076813 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:03.076885 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:03.077203 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:03.576472 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:03.576552 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:03.576886 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:03.878560 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:37:03.937026 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:37:03.940694 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:37:03.940802 1614600 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1209 04:37:04.077117 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:04.077194 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:04.077475 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:04.077526 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:04.577262 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:04.577353 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:04.577683 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:05.076432 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:05.076509 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:05.076859 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:05.576499 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:05.576570 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:05.576819 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:06.076507 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:06.076588 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:06.076929 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:06.576622 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:06.576698 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:06.577017 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:06.577081 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:07.077327 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:07.077411 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:07.077929 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:07.576501 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:07.576583 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:07.576933 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:08.076696 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:08.076799 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:08.077190 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:08.576870 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:08.576949 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:08.577244 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:08.577297 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:09.077127 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:09.077205 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:09.077553 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:09.577337 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:09.577415 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:09.577756 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:10.076460 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:10.076539 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:10.076863 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:10.576477 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:10.576568 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:10.576890 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:11.076583 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:11.076663 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:11.077008 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:11.077056 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:11.576443 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:11.576515 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:11.576833 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:12.076918 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:12.077013 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:12.077297 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:12.577105 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:12.577178 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:12.577483 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:13.077233 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:13.077301 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:13.077597 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:13.077653 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:13.577407 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:13.577483 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:13.577834 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:14.076503 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:14.076582 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:14.076903 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:14.576479 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:14.576560 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:14.576892 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:15.076512 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:15.076589 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:15.076989 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:15.576573 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:15.576653 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:15.577011 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:15.577067 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:16.076438 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:16.076506 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:16.076844 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:16.576547 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:16.576641 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:16.576975 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:17.076966 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:17.077042 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:17.077390 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:17.349771 1614600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:37:17.409388 1614600 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:37:17.413192 1614600 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1209 04:37:17.413302 1614600 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1209 04:37:17.416242 1614600 out.go:179] * Enabled addons: 
	I1209 04:37:17.419770 1614600 addons.go:530] duration metric: took 1m35.33224358s for enable addons: enabled=[]
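
At this point every path to the control plane has failed the same way: the node polls against https://192.168.49.2:8441 and kubectl's openapi download from localhost:8441 both end in "connect: connection refused", so the addon phase finishes with enabled=[]. A refused connection means nothing is bound to the port at all, as opposed to a hang or a TLS failure. The sketch below is a quick reachability probe for that distinction, reusing the host and port from the log; it is a diagnostic aid, not part of the test harness.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.49.2:8441" // apiserver endpoint from the log above
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		// "connection refused" here confirms no listener on the port.
		fmt.Printf("apiserver not reachable: %v\n", err)
		return
	}
	conn.Close()
	fmt.Println("tcp connect ok; apiserver port is listening")
}
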
	I1209 04:37:17.576427 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:17.576504 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:17.576800 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:18.076477 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:18.076562 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:18.076914 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:18.076974 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:18.576508 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:18.576586 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:18.576933 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:19.076609 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:19.076683 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:19.077016 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:19.576492 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:19.576586 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:19.576903 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:20.076626 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:20.076704 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:20.077078 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:20.077138 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:20.576447 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:20.576514 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:20.576867 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:21.076557 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:21.076645 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:21.076996 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:21.576492 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:21.576568 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:21.576907 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:22.076971 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:22.077046 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:22.077320 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:22.077371 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:22.577119 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:22.577200 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:22.577508 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:23.077228 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:23.077302 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:23.077678 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:23.577301 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:23.577385 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:23.577646 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:24.077387 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:24.077467 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:24.077801 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:24.077859 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:24.576410 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:24.576486 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:24.576813 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:25.076445 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:25.076516 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:25.076845 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:25.576541 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:25.576634 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:25.576928 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:26.076617 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:26.076695 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:26.077076 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:26.576434 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:26.576510 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:26.576842 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:26.576894 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:27.077363 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:27.077438 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:27.077772 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:27.576489 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:27.576571 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:27.576899 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:28.076461 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:28.076533 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:28.076819 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:28.576482 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:28.576561 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:28.576853 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:29.076585 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:29.076670 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:29.077006 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:29.077067 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:29.576518 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:29.576604 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:29.576904 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:30.076534 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:30.076619 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:30.077013 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:30.576516 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:30.576599 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:30.576943 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:31.076627 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:31.076712 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:31.077034 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:31.576748 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:31.576823 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:31.577148 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:31.577206 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:32.077358 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:32.077437 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:32.077778 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:32.576461 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:32.576535 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:32.576870 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:33.076486 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:33.076565 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:33.076904 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:33.576613 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:33.576689 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:33.577020 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:34.076719 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:34.076790 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:34.077129 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:34.077191 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:34.576481 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:34.576554 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:34.576909 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:35.076619 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:35.076695 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:35.077045 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:35.576555 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:35.576651 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:35.576958 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:36.076520 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:36.076606 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:36.076943 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:36.576480 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:36.576557 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:36.576849 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:36.576893 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:37.076700 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:37.076768 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:37.077025 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:37.576452 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:37.576527 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:37.576844 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:38.076505 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:38.076581 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:38.076931 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:38.576488 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:38.576566 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:38.576841 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:39.076477 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:39.076559 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:39.076894 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:39.076952 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:39.576497 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:39.576582 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:39.576911 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:40.076451 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:40.076525 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:40.076830 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:40.576466 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:40.576543 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:40.576873 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:41.076505 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:41.076581 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:41.076918 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:41.076977 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:41.576436 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:41.576507 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:41.576804 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:42.076573 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:42.076649 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:42.077059 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:42.576779 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:42.576872 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:42.577233 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:43.077479 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:43.077558 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:43.077870 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:43.077918 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:37:43.576487 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:43.576579 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:43.576959 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:44.076698 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:44.076780 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:44.077140 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:44.576789 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:44.576864 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:44.577123 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:45.076532 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:45.076619 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:45.077046 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:37:45.576773 1614600 type.go:168] "Request Body" body=""
	I1209 04:37:45.576852 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:37:45.577196 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:37:45.577268 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET poll continued every ~500ms through 04:38:46, every attempt failing before an HTTP response; node_ready.go:55 logged the identical "will retry" connection-refused warning at 04:37:48, 04:37:50, 04:37:52, 04:37:55, 04:37:57, 04:37:59, 04:38:01, 04:38:04, 04:38:06, 04:38:09, 04:38:11, 04:38:13, 04:38:15, 04:38:17, 04:38:20, 04:38:22, 04:38:24, 04:38:27, 04:38:29, 04:38:32, 04:38:34, 04:38:36, 04:38:39, 04:38:41, and 04:38:43 ...]
	W1209 04:38:46.077724 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:46.576427 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:46.576518 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:46.576860 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:47.076726 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:47.076801 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:47.077144 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:47.576574 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:47.576648 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:47.576923 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:48.076626 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:48.076715 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:48.077126 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:48.576849 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:48.576930 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:48.577268 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:48.577334 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:49.077051 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:49.077122 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:49.077394 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:49.577191 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:49.577270 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:49.577582 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:50.077370 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:50.077454 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:50.077810 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:50.576424 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:50.576502 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:50.576796 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:51.076506 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:51.076583 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:51.076910 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:51.076969 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:51.576623 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:51.576749 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:51.577040 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:52.077085 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:52.077160 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:52.077422 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:52.577216 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:52.577295 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:52.577613 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:53.077392 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:53.077475 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:53.077797 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:53.077856 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:53.576362 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:53.576448 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:53.576718 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:54.076489 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:54.076568 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:54.076906 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:54.576614 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:54.576695 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:54.577055 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:55.076745 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:55.076818 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:55.077132 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:55.576528 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:55.576605 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:55.576901 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:55.576949 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:56.076653 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:56.076741 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:56.077039 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:56.576380 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:56.576457 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:56.576717 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:57.076676 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:57.076750 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:57.077090 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:57.576453 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:57.576546 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:57.576855 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:58.076528 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:58.076633 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:58.076936 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:38:58.076991 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:38:58.576513 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:58.576586 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:58.576869 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:59.076607 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:59.076681 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:59.077015 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:38:59.576391 1614600 type.go:168] "Request Body" body=""
	I1209 04:38:59.576459 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:38:59.576721 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:00.076467 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:00.076562 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:00.076886 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:00.576524 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:00.576622 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:00.576958 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:00.577017 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:01.076583 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:01.076670 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:01.077008 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:01.576525 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:01.576603 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:01.576887 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:02.077021 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:02.077100 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:02.077451 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:02.577124 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:02.577217 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:02.577512 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:02.577562 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:03.077323 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:03.077407 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:03.077775 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:03.576388 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:03.576462 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:03.576801 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:04.076514 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:04.076589 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:04.076927 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:04.576506 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:04.576586 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:04.576948 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:05.076534 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:05.076614 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:05.076965 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:05.077020 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:05.576441 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:05.576512 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:05.576828 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:06.076541 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:06.076627 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:06.076963 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:06.576692 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:06.576772 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:06.577111 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:07.076853 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:07.076924 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:07.077177 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:07.077219 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:07.576482 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:07.576580 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:07.576924 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:08.076518 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:08.076598 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:08.076971 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:08.576536 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:08.576605 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:08.576907 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:09.076495 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:09.076571 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:09.076930 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:09.576669 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:09.576753 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:09.577117 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:09.577174 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:10.076441 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:10.076525 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:10.076856 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:10.576508 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:10.576584 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:10.576962 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:11.076574 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:11.076664 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:11.077066 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:11.576620 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:11.576687 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:11.576941 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:12.077176 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:12.077252 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:12.077629 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:12.077711 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:12.576425 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:12.576516 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:12.576897 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:13.076570 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:13.076642 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:13.076950 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:13.576510 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:13.576587 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:13.576938 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:14.076477 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:14.076552 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:14.076894 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:14.576443 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:14.576522 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:14.576831 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:14.576881 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:15.076545 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:15.076624 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:15.076935 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:15.576475 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:15.576552 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:15.576870 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:16.076458 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:16.076538 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:16.076835 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:16.576450 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:16.576533 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:16.576890 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:16.576949 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:17.076773 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:17.076853 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:17.077193 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:17.576588 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:17.576661 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:17.576992 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:18.076473 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:18.076552 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:18.076899 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:18.576718 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:18.576802 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:18.577123 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:18.577182 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:19.076436 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:19.076509 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:19.076822 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:19.576524 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:19.576621 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:19.576983 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:20.076486 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:20.076564 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:20.076929 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:20.576479 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:20.576557 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:20.576928 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:21.076622 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:21.076716 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:21.077074 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:21.077128 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:21.576821 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:21.576903 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:21.577234 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:22.077298 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:22.077380 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:22.077644 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:22.576377 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:22.576459 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:22.576821 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:23.076525 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:23.076606 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:23.076901 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:23.576410 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:23.576486 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:23.576738 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:23.576788 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:24.076805 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:24.076886 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:24.077219 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:24.577078 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:24.577155 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:24.577448 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:25.077345 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:25.077571 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:25.078098 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:25.576509 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:25.576598 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:25.576942 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:25.576994 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:26.076519 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:26.076622 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:26.076931 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:26.576506 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:26.576571 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:26.576844 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:27.076774 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:27.076849 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:27.077183 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:27.577039 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:27.577116 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:27.577462 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:27.577520 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:28.077111 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:28.077189 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:28.077451 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:28.577185 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:28.577261 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:28.577578 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:29.077440 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:29.077520 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:29.077849 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:29.576465 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:29.576538 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:29.576812 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:30.076538 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:30.076629 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:30.076998 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:30.077061 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:30.576517 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:30.576595 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:30.576923 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:31.076574 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:31.076653 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:31.076955 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:31.576521 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:31.576595 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:31.576916 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:32.076880 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:32.076954 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:32.077270 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:32.077326 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:32.577067 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:32.577140 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:32.577413 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:33.077278 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:33.077360 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:33.077744 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:33.576501 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:33.576578 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:33.576898 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:34.076472 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:34.076561 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:34.076906 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:34.576597 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:34.576678 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:34.577003 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:34.577065 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:35.076500 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:35.076575 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:35.076882 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:35.576445 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:35.576524 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:35.576826 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:36.076479 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:36.076564 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:36.076904 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:36.576487 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:36.576571 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:36.576925 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:37.076840 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:37.076915 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:37.077171 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:37.077211 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:37.576860 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:37.576938 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:37.577250 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:38.077017 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:38.077094 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:38.077417 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:38.577139 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:38.577221 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:38.577485 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:39.077239 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:39.077314 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:39.077657 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:39.077722 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:39.576442 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:39.576520 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:39.576852 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:40.076585 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:40.076663 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:40.076928 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:40.576498 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:40.576578 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:40.576913 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:41.076507 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:41.076590 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:41.076933 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:41.576475 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:41.576545 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:41.576856 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:41.576911 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:42.077042 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:42.077129 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:42.077525 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:42.577190 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:42.577270 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:42.577607 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:43.076449 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:43.076528 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:43.077049 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:43.576527 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:43.576617 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:43.576993 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:43.577070 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:44.076789 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:44.076865 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:44.077206 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:44.576988 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:44.577058 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:44.577402 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:45.077482 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:45.077593 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:45.078175 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:45.577050 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:45.577162 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:45.577633 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:45.577692 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:46.077289 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:46.077367 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:46.077631 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:46.577385 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:46.577458 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:46.577783 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:47.076819 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:47.076895 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:47.077306 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:47.577090 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:47.577164 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:47.577430 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:48.077209 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:48.077287 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:48.077634 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:48.077694 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:48.576414 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:48.576492 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:48.576820 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:49.076429 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:49.076509 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:49.076812 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:49.576499 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:49.576573 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:49.576922 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:50.076634 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:50.076716 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:50.077027 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:50.576449 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:50.576535 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:50.576852 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:50.576904 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:51.076500 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:51.076582 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:51.076954 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:51.576645 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:51.576720 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:51.577036 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:52.077317 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:52.077391 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:52.077666 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:52.576377 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:52.576457 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:52.576786 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:53.076496 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:53.076575 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:53.076936 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:53.076994 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:53.576477 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:53.576556 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:53.576834 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:54.076502 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:54.076579 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:54.076906 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:54.576494 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:54.576578 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:54.576894 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:55.076446 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:55.076520 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:55.076829 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:55.576458 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:55.576544 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:55.576864 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:55.576922 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:56.076627 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:56.076713 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:56.077075 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:56.576606 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:56.576684 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:56.576957 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:57.076915 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:57.076989 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:57.077329 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:57.577142 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:57.577223 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:57.577545 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:39:57.577606 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:39:58.077310 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:58.077382 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:58.077644 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:58.576398 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:58.576474 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:58.576810 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:59.076495 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:59.076569 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:59.076901 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:39:59.576452 1614600 type.go:168] "Request Body" body=""
	I1209 04:39:59.576522 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:39:59.576814 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:00.076619 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:00.076707 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:00.077051 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:00.077102 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:00.576782 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:00.576892 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:00.577341 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:01.077110 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:01.077188 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:01.077469 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:01.577344 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:01.577442 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:01.577802 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:02.077046 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:02.077122 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:02.077464 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:02.077524 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:02.577200 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:02.577280 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:02.577554 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:03.077335 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:03.077410 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:03.077751 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:03.576497 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:03.576579 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:03.576927 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:04.076619 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:04.076693 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:04.076986 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:04.576717 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:04.576802 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:04.577167 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:04.577233 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:05.077000 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:05.077083 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:05.077407 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:05.577162 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:05.577240 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:05.577561 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:06.077371 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:06.077455 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:06.077846 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:06.576606 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:06.576686 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:06.577045 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:07.076867 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:07.076956 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:07.077237 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:07.077285 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:07.577031 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:07.577112 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:07.577448 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:08.077143 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:08.077231 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:08.077595 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:08.577327 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:08.577403 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:08.577658 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:09.076424 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:09.076510 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:09.076843 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:09.576572 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:09.576654 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:09.577008 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:09.577065 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:10.076510 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:10.076592 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:10.076913 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:10.576495 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:10.576569 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:10.576912 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:11.076619 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:11.076698 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:11.077076 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:11.576765 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:11.576835 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:11.577096 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:11.577137 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:12.077236 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:12.077311 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:12.077690 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:12.576433 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:12.576521 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:12.576860 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:13.076474 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:13.076548 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:13.076826 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:13.576501 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:13.576589 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:13.576934 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:14.076645 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:14.076722 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:14.077046 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:14.077105 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:14.576455 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:14.576537 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:14.576860 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:15.076501 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:15.076587 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:15.076968 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:15.576689 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:15.576770 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:15.577097 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:16.076449 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:16.076527 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:16.076791 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:16.576477 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:16.576559 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:16.576904 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:16.576962 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:17.076730 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:17.076809 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:17.077145 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:17.576557 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:17.576637 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:17.576969 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:18.076487 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:18.076564 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:18.076935 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:18.576468 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:18.576582 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:18.576907 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:19.076426 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:19.076498 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:19.076819 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:19.076870 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:19.576490 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:19.576567 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:19.576904 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:20.076514 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:20.076611 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:20.076996 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:20.576452 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:20.576533 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:20.576869 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:21.076479 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:21.076558 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:21.076898 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:21.076954 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:21.576671 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:21.576745 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:21.577092 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:22.077101 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:22.077188 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:22.077458 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:22.577307 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:22.577395 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:22.577780 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:23.076488 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:23.076566 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:23.076905 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:23.576594 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:23.576667 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:23.576979 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:23.577044 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:24.076716 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:24.076812 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:24.077201 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:24.577016 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:24.577098 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:24.577427 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:25.077197 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:25.077272 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:25.077553 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:25.577396 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:25.577471 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:25.577807 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:25.577866 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:26.076551 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:26.076646 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:26.077007 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:26.576462 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:26.576534 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:26.576839 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:27.076813 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:27.076897 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:27.077258 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:27.577061 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:27.577148 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:27.577479 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:28.077203 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:28.077282 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:28.077580 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:28.077625 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:28.576412 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:28.576489 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:28.576847 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:29.076502 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:29.076581 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:29.076943 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:29.576637 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:29.576712 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:29.576969 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:30.076527 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:30.076611 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:30.077034 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:30.576765 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:30.576846 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:30.577180 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:30.577234 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:40:31.076904 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:31.076979 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:31.077238 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:31.577016 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:31.577093 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:31.577496 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:32.077307 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:32.077384 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:32.077722 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:32.576465 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:32.576539 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:32.576829 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:40:33.076490 1614600 type.go:168] "Request Body" body=""
	I1209 04:40:33.076563 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:40:33.076911 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:40:33.076973 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:40:35.077212 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:40:37.576915 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:40:39.577116 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:40:42.078001 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:40:44.576957 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:40:47.077300 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:40:49.577037 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:40:51.577220 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:40:54.077005 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:40:56.077057 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:40:58.077586 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:41:00.577372 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:41:03.077250 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:41:05.077723 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:41:07.576918 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:41:09.576964 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:41:12.077535 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:41:14.576999 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:41:16.577105 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:41:18.577127 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:41:21.077209 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:41:23.576864 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:41:25.576919 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:41:27.577607 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	W1209 04:41:30.077050 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:30.576505 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:30.576603 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:30.576966 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:31.076669 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:31.076744 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:31.077007 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:31.576502 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:31.576574 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:31.576918 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:32.076988 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:32.077068 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:32.077435 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:32.077497 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:32.577199 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:32.577274 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:32.577539 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:33.077339 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:33.077443 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:33.077811 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:33.576503 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:33.576588 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:33.576930 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:34.076499 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:34.076573 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:34.076861 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:34.576573 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:34.576657 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:34.577014 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:34.577071 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:35.076473 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:35.076546 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:35.076895 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:35.576501 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:35.576570 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:35.576829 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:36.076518 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:36.076598 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:36.076971 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:36.576553 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:36.576637 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:36.577032 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:37.076948 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:37.077019 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:37.077352 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:37.077398 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:37.577132 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:37.577216 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:37.577592 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:38.077367 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:38.077444 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:38.077774 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:38.576480 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:38.576549 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:38.576826 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:39.076517 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:39.076596 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:39.077020 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:39.576754 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:39.576834 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:39.577168 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:39.577222 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:40.076627 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:40.076703 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:40.076991 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:40.576486 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:40.576560 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:40.576891 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:41.076611 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:41.076693 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:41.077032 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:41.577374 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:41.577443 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:41.577738 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1209 04:41:41.577796 1614600 node_ready.go:55] error getting node "functional-331811" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-331811": dial tcp 192.168.49.2:8441: connect: connection refused
	I1209 04:41:42.076410 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:42.076517 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:42.076959 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:42.576665 1614600 type.go:168] "Request Body" body=""
	I1209 04:41:42.576744 1614600 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-331811" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1209 04:41:42.577069 1614600 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1209 04:41:43.076660 1614600 node_ready.go:38] duration metric: took 6m0.000391304s for node "functional-331811" to be "Ready" ...
	I1209 04:41:43.080060 1614600 out.go:203] 
	W1209 04:41:43.083006 1614600 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1209 04:41:43.083030 1614600 out.go:285] * 
	W1209 04:41:43.085173 1614600 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 04:41:43.088614 1614600 out.go:203] 
	
	
	==> CRI-O <==
	Dec 09 04:41:52 functional-331811 crio[5392]: time="2025-12-09T04:41:52.495480805Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=1b07b985-4d88-4211-a868-754c2842560d name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:53 functional-331811 crio[5392]: time="2025-12-09T04:41:53.571076374Z" level=info msg="Checking image status: minikube-local-cache-test:functional-331811" id=7673cc8c-89c1-4a84-8a18-b3b039a63ff9 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:53 functional-331811 crio[5392]: time="2025-12-09T04:41:53.571286165Z" level=info msg="Resolving \"minikube-local-cache-test\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 09 04:41:53 functional-331811 crio[5392]: time="2025-12-09T04:41:53.571344077Z" level=info msg="Image minikube-local-cache-test:functional-331811 not found" id=7673cc8c-89c1-4a84-8a18-b3b039a63ff9 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:53 functional-331811 crio[5392]: time="2025-12-09T04:41:53.571451492Z" level=info msg="Neither image nor artfiact minikube-local-cache-test:functional-331811 found" id=7673cc8c-89c1-4a84-8a18-b3b039a63ff9 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:53 functional-331811 crio[5392]: time="2025-12-09T04:41:53.596017684Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-331811" id=5fc4b7b0-d579-4f99-9fe9-2cdeab8984cb name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:53 functional-331811 crio[5392]: time="2025-12-09T04:41:53.596202818Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-331811 not found" id=5fc4b7b0-d579-4f99-9fe9-2cdeab8984cb name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:53 functional-331811 crio[5392]: time="2025-12-09T04:41:53.596262707Z" level=info msg="Neither image nor artfiact docker.io/library/minikube-local-cache-test:functional-331811 found" id=5fc4b7b0-d579-4f99-9fe9-2cdeab8984cb name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:53 functional-331811 crio[5392]: time="2025-12-09T04:41:53.62409553Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-331811" id=cdf26a85-05d7-478c-921b-c7cfd547e778 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:53 functional-331811 crio[5392]: time="2025-12-09T04:41:53.624254367Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-331811 not found" id=cdf26a85-05d7-478c-921b-c7cfd547e778 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:53 functional-331811 crio[5392]: time="2025-12-09T04:41:53.624315922Z" level=info msg="Neither image nor artfiact localhost/library/minikube-local-cache-test:functional-331811 found" id=cdf26a85-05d7-478c-921b-c7cfd547e778 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:54 functional-331811 crio[5392]: time="2025-12-09T04:41:54.60951842Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=f4b4abbe-d973-456f-90ca-4da5f8983018 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:54 functional-331811 crio[5392]: time="2025-12-09T04:41:54.93936094Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=427106ed-a66d-43e6-a017-1cd0025c39fe name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:54 functional-331811 crio[5392]: time="2025-12-09T04:41:54.939510652Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=427106ed-a66d-43e6-a017-1cd0025c39fe name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:54 functional-331811 crio[5392]: time="2025-12-09T04:41:54.939551054Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=427106ed-a66d-43e6-a017-1cd0025c39fe name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:55 functional-331811 crio[5392]: time="2025-12-09T04:41:55.635199405Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=9d1fcc30-e692-4f4c-a0f4-5fadf0221e50 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:55 functional-331811 crio[5392]: time="2025-12-09T04:41:55.635350495Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=9d1fcc30-e692-4f4c-a0f4-5fadf0221e50 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:55 functional-331811 crio[5392]: time="2025-12-09T04:41:55.635402557Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=9d1fcc30-e692-4f4c-a0f4-5fadf0221e50 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:55 functional-331811 crio[5392]: time="2025-12-09T04:41:55.660171631Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=d94e6396-8709-4670-b3c7-435cd438defe name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:55 functional-331811 crio[5392]: time="2025-12-09T04:41:55.660323928Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=d94e6396-8709-4670-b3c7-435cd438defe name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:55 functional-331811 crio[5392]: time="2025-12-09T04:41:55.660362107Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=d94e6396-8709-4670-b3c7-435cd438defe name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:55 functional-331811 crio[5392]: time="2025-12-09T04:41:55.684198232Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=eb6e5bea-4040-4e55-bf42-8d86d17b24ec name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:55 functional-331811 crio[5392]: time="2025-12-09T04:41:55.684352704Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=eb6e5bea-4040-4e55-bf42-8d86d17b24ec name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:55 functional-331811 crio[5392]: time="2025-12-09T04:41:55.684396798Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=eb6e5bea-4040-4e55-bf42-8d86d17b24ec name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:41:56 functional-331811 crio[5392]: time="2025-12-09T04:41:56.235761601Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=02beed57-a79e-4bda-819d-650c188a8e7a name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:42:00.545222    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:42:00.545970    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:42:00.547541    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:42:00.548039    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:42:00.549566    9597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 9 02:15] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 03:35] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 04:15] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 04:17] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:23] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:24] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:41] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 04:42:00 up  9:24,  0 user,  load average: 0.59, 0.38, 0.77
	Linux functional-331811 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 09 04:41:58 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:41:58 functional-331811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1158.
	Dec 09 04:41:58 functional-331811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:41:58 functional-331811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:41:58 functional-331811 kubelet[9485]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:41:58 functional-331811 kubelet[9485]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:41:58 functional-331811 kubelet[9485]: E1209 04:41:58.826457    9485 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:41:58 functional-331811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:41:58 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:41:59 functional-331811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1159.
	Dec 09 04:41:59 functional-331811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:41:59 functional-331811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:41:59 functional-331811 kubelet[9508]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:41:59 functional-331811 kubelet[9508]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:41:59 functional-331811 kubelet[9508]: E1209 04:41:59.678513    9508 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:41:59 functional-331811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:41:59 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:42:00 functional-331811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1160.
	Dec 09 04:42:00 functional-331811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:42:00 functional-331811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:42:00 functional-331811 kubelet[9576]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:42:00 functional-331811 kubelet[9576]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:42:00 functional-331811 kubelet[9576]: E1209 04:42:00.475886    9576 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:42:00 functional-331811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:42:00 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-331811 -n functional-331811
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-331811 -n functional-331811: exit status 2 (375.767245ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-331811" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.72s)
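
Every failure in this block traces back to the apiserver at 192.168.49.2:8441 refusing connections while the kubelet crash-loops. A minimal sketch of the same checks run by hand (profile name and endpoint taken from this report; the kubectl context name is assumed to match the profile):

	# probe the endpoint the node_ready poll was hitting
	curl -k --max-time 5 https://192.168.49.2:8441/healthz
	# the harness's own status check
	out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-331811
	# wait for the Ready condition the test polls, with the same 6m budget
	kubectl --context functional-331811 wait --for=condition=Ready node/functional-331811 --timeout=6m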

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (734.66s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-331811 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1209 04:44:31.980228 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:46:21.786777 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-790468/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:47:44.851310 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-790468/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:49:31.980016 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:51:21.787803 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-790468/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-331811 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m12.534821407s)

                                                
                                                
-- stdout --
	* [functional-331811] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-1577059/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1577059/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-331811" primary control-plane node in "functional-331811" cluster
	* Pulling base image v0.0.48-1765184860-22066 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000344141s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000259173s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[... identical kubeadm init stdout/stderr as quoted above under "X Error starting cluster" ...]
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

                                                
                                                
** /stderr **
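
The repeated kubelet failure in the journal above ("kubelet is configured to not run on a host using cgroup v1") matches the SystemVerification warning: on a cgroup v1 host, kubelet v1.35+ refuses to start unless the `FailCgroupV1` option is set to `false`. A sketch of that option in a standalone KubeletConfiguration file (field name assumed to be the camelCase form of the option named in the warning; minikube generates its own kubelet config, so this is illustrative, not what ran here):

	cat <<-'EOF' > kubelet-cgroupv1.yaml
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF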
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-331811 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m12.536188058s for "functional-331811" cluster.
I1209 04:54:14.126828 1580521 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
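
The stderr block above already prints the suggested retry; spelled out as a command against this profile (whether it clears the cgroup v1 validation failure on this host is not verified here):

	out/minikube-linux-arm64 start -p functional-331811 --extra-config=kubelet.cgroup-driver=systemd --wait=all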
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-331811
helpers_test.go:243: (dbg) docker inspect functional-331811:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87",
	        "Created": "2025-12-09T04:27:19.770188806Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1609115,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T04:27:19.828715728Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/hostname",
	        "HostsPath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/hosts",
	        "LogPath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87-json.log",
	        "Name": "/functional-331811",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-331811:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-331811",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87",
	                "LowerDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685-init/diff:/var/lib/docker/overlay2/cb3f2b8eaaa8875b2899fccd39c4eec1759909855a0b804bc10246bdeabb16ed/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-331811",
	                "Source": "/var/lib/docker/volumes/functional-331811/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-331811",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-331811",
	                "name.minikube.sigs.k8s.io": "functional-331811",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5c0753338127320f08906f0ae98414e1971b55970cf028db179c2214fd2722cb",
	            "SandboxKey": "/var/run/docker/netns/5c0753338127",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34255"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34256"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34259"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34257"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34258"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-331811": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:27:66:bb:a1:d6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8c16962547dedb5d6155d1546bcc27e347ab5261f9ad46fc3b09cc8fb9cc112f",
	                    "EndpointID": "1a5d6a22e9497009b4121ea56dc4839e2ff8827d92252c0464236c5f49c11216",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-331811",
	                        "51da5dad63e9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
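The JSON above is the tail of the `docker container inspect` dump captured for the functional-331811 node. The part the rest of this post-mortem leans on is the `.NetworkSettings.Ports` map: each container port (22, 2376, 5000, 8441, 32443) is published on 127.0.0.1 behind a randomly assigned host port, and the provisioner recovers those ports with a Go template (the `cli_runner` lines further down show the same template). A minimal standalone sketch of that lookup, assuming a local Docker daemon and the container name from this run:

	// lookup_port.go: recover the host port Docker published for the node's
	// SSH port (22/tcp), using the same template the provisioner log shows.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "functional-331811").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("22/tcp ->", strings.TrimSpace(string(out))) // 34255 in this run
	}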
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-331811 -n functional-331811
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-331811 -n functional-331811: exit status 2 (315.974751ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
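helpers_test.go flags the non-zero exit as possibly benign because `minikube status` reports component health through its exit code: the host container is Running, but a stopped kubelet or apiserver still forces a non-zero status. Expanding the one-field `--format={{.Host}}` check to the other status fields makes the stopped component visible; a sketch, assuming the same binary and profile as this run (Host, Kubelet and APIServer are minikube's status fields):

	// status_fields.go: query all three component fields instead of Host only.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64", "status", "-p", "functional-331811",
			"--format", "host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}").CombinedOutput()
		fmt.Printf("%s (exit: %v)\n", out, err) // a non-nil err mirrors the "exit status 2" above
	}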
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-331811 logs -n 25: (1.003175264s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-790468 image ls --format short --alsologtostderr                                                                                       │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ ssh     │ functional-790468 ssh pgrep buildkitd                                                                                                             │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │                     │
	│ image   │ functional-790468 image ls --format json --alsologtostderr                                                                                        │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image   │ functional-790468 image ls --format table --alsologtostderr                                                                                       │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image   │ functional-790468 image build -t localhost/my-image:functional-790468 testdata/build --alsologtostderr                                            │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image   │ functional-790468 image ls                                                                                                                        │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ delete  │ -p functional-790468                                                                                                                              │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ start   │ -p functional-331811 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │                     │
	│ start   │ -p functional-331811 --alsologtostderr -v=8                                                                                                       │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:35 UTC │                     │
	│ cache   │ functional-331811 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ cache   │ functional-331811 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ cache   │ functional-331811 cache add registry.k8s.io/pause:latest                                                                                          │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ cache   │ functional-331811 cache add minikube-local-cache-test:functional-331811                                                                           │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ cache   │ functional-331811 cache delete minikube-local-cache-test:functional-331811                                                                        │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ ssh     │ functional-331811 ssh sudo crictl images                                                                                                          │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ ssh     │ functional-331811 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ ssh     │ functional-331811 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │                     │
	│ cache   │ functional-331811 cache reload                                                                                                                    │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ ssh     │ functional-331811 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ kubectl │ functional-331811 kubectl -- --context functional-331811 get pods                                                                                 │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │                     │
	│ start   │ -p functional-331811 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                          │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:42 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
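The audit trail pins down the failing sequence: the profile's first start at 04:27 never recorded an END TIME, the plain re-start at 04:35 also never completed, and the ExtraConfig start at 04:42 is the run this post-mortem covers. A minimal repro sketch of that last command, using only the binary, profile, and flags recorded in the table above:

	// repro_extraconfig.go: re-issue the last audited start; in this run it
	// never reached an END TIME, i.e. the cluster never became healthy.
	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-331811",
			"--extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision", "--wait=all")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		_ = cmd.Run()
	}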
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 04:42:01
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 04:42:01.637786 1620518 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:42:01.637909 1620518 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:42:01.637913 1620518 out.go:374] Setting ErrFile to fd 2...
	I1209 04:42:01.637918 1620518 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:42:01.638166 1620518 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 04:42:01.638522 1620518 out.go:368] Setting JSON to false
	I1209 04:42:01.639450 1620518 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":33862,"bootTime":1765221460,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1209 04:42:01.639510 1620518 start.go:143] virtualization:  
	I1209 04:42:01.642955 1620518 out.go:179] * [functional-331811] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 04:42:01.646014 1620518 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 04:42:01.646101 1620518 notify.go:221] Checking for updates...
	I1209 04:42:01.651837 1620518 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:42:01.654857 1620518 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:42:01.657670 1620518 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1577059/.minikube
	I1209 04:42:01.660510 1620518 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 04:42:01.663383 1620518 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 04:42:01.666731 1620518 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 04:42:01.666828 1620518 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:42:01.689070 1620518 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:42:01.689175 1620518 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:42:01.744025 1620518 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-09 04:42:01.734708732 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:42:01.744121 1620518 docker.go:319] overlay module found
	I1209 04:42:01.749121 1620518 out.go:179] * Using the docker driver based on existing profile
	I1209 04:42:01.751932 1620518 start.go:309] selected driver: docker
	I1209 04:42:01.751941 1620518 start.go:927] validating driver "docker" against &{Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:42:01.752051 1620518 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 04:42:01.752158 1620518 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:42:01.824076 1620518 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-09 04:42:01.81179321 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:42:01.824456 1620518 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 04:42:01.824480 1620518 cni.go:84] Creating CNI manager for ""
	I1209 04:42:01.824537 1620518 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 04:42:01.824578 1620518 start.go:353] cluster config:
	{Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:42:01.827700 1620518 out.go:179] * Starting "functional-331811" primary control-plane node in "functional-331811" cluster
	I1209 04:42:01.830624 1620518 cache.go:134] Beginning downloading kic base image for docker with crio
	I1209 04:42:01.833519 1620518 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 04:42:01.836178 1620518 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1209 04:42:01.836217 1620518 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1209 04:42:01.836228 1620518 cache.go:65] Caching tarball of preloaded images
	I1209 04:42:01.836255 1620518 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 04:42:01.836324 1620518 preload.go:238] Found /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1209 04:42:01.836333 1620518 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1209 04:42:01.836451 1620518 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/config.json ...
	I1209 04:42:01.855430 1620518 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 04:42:01.855441 1620518 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 04:42:01.855455 1620518 cache.go:243] Successfully downloaded all kic artifacts
	I1209 04:42:01.855485 1620518 start.go:360] acquireMachinesLock for functional-331811: {Name:mkd467b4f3dd08f05040481144eb7b6b1e27d3ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 04:42:01.855543 1620518 start.go:364] duration metric: took 40.87µs to acquireMachinesLock for "functional-331811"
	I1209 04:42:01.855566 1620518 start.go:96] Skipping create...Using existing machine configuration
	I1209 04:42:01.855570 1620518 fix.go:54] fixHost starting: 
	I1209 04:42:01.855819 1620518 cli_runner.go:164] Run: docker container inspect functional-331811 --format={{.State.Status}}
	I1209 04:42:01.873325 1620518 fix.go:112] recreateIfNeeded on functional-331811: state=Running err=<nil>
	W1209 04:42:01.873351 1620518 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 04:42:01.876665 1620518 out.go:252] * Updating the running docker "functional-331811" container ...
	I1209 04:42:01.876693 1620518 machine.go:94] provisionDockerMachine start ...
	I1209 04:42:01.876797 1620518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:42:01.894796 1620518 main.go:143] libmachine: Using SSH client type: native
	I1209 04:42:01.895121 1620518 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34255 <nil> <nil>}
	I1209 04:42:01.895129 1620518 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 04:42:02.058680 1620518 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-331811
	
	I1209 04:42:02.058696 1620518 ubuntu.go:182] provisioning hostname "functional-331811"
	I1209 04:42:02.058761 1620518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:42:02.090920 1620518 main.go:143] libmachine: Using SSH client type: native
	I1209 04:42:02.091365 1620518 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34255 <nil> <nil>}
	I1209 04:42:02.091379 1620518 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-331811 && echo "functional-331811" | sudo tee /etc/hostname
	I1209 04:42:02.262883 1620518 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-331811
	
	I1209 04:42:02.262960 1620518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:42:02.281315 1620518 main.go:143] libmachine: Using SSH client type: native
	I1209 04:42:02.281623 1620518 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34255 <nil> <nil>}
	I1209 04:42:02.281637 1620518 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-331811' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-331811/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-331811' | sudo tee -a /etc/hosts; 
				fi
			fi
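		# the guard above leaves /etc/hosts untouched when a line already ends in the
		# hostname; otherwise it rewrites the existing 127.0.1.1 entry in place
		# rather than appending a duplicate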
	I1209 04:42:02.435135 1620518 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 04:42:02.435152 1620518 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1577059/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1577059/.minikube}
	I1209 04:42:02.435179 1620518 ubuntu.go:190] setting up certificates
	I1209 04:42:02.435197 1620518 provision.go:84] configureAuth start
	I1209 04:42:02.435267 1620518 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-331811
	I1209 04:42:02.452748 1620518 provision.go:143] copyHostCerts
	I1209 04:42:02.452806 1620518 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem, removing ...
	I1209 04:42:02.452813 1620518 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem
	I1209 04:42:02.452891 1620518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem (1078 bytes)
	I1209 04:42:02.452996 1620518 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem, removing ...
	I1209 04:42:02.453000 1620518 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem
	I1209 04:42:02.453027 1620518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem (1123 bytes)
	I1209 04:42:02.453088 1620518 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem, removing ...
	I1209 04:42:02.453092 1620518 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem
	I1209 04:42:02.453121 1620518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem (1675 bytes)
	I1209 04:42:02.453207 1620518 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem org=jenkins.functional-331811 san=[127.0.0.1 192.168.49.2 functional-331811 localhost minikube]
	I1209 04:42:02.729112 1620518 provision.go:177] copyRemoteCerts
	I1209 04:42:02.729174 1620518 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 04:42:02.729226 1620518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:42:02.747750 1620518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:42:02.856241 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 04:42:02.877475 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 04:42:02.898967 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 04:42:02.917189 1620518 provision.go:87] duration metric: took 481.970064ms to configureAuth
	I1209 04:42:02.917207 1620518 ubuntu.go:206] setting minikube options for container-runtime
	I1209 04:42:02.917407 1620518 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 04:42:02.917510 1620518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:42:02.935642 1620518 main.go:143] libmachine: Using SSH client type: native
	I1209 04:42:02.935957 1620518 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34255 <nil> <nil>}
	I1209 04:42:02.935968 1620518 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 04:42:03.293502 1620518 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 04:42:03.293517 1620518 machine.go:97] duration metric: took 1.416817164s to provisionDockerMachine
	I1209 04:42:03.293527 1620518 start.go:293] postStartSetup for "functional-331811" (driver="docker")
	I1209 04:42:03.293537 1620518 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 04:42:03.293597 1620518 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 04:42:03.293653 1620518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:42:03.312696 1620518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:42:03.419010 1620518 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 04:42:03.422897 1620518 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 04:42:03.422917 1620518 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 04:42:03.422927 1620518 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1577059/.minikube/addons for local assets ...
	I1209 04:42:03.422995 1620518 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1577059/.minikube/files for local assets ...
	I1209 04:42:03.423075 1620518 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem -> 15805212.pem in /etc/ssl/certs
	I1209 04:42:03.423167 1620518 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/test/nested/copy/1580521/hosts -> hosts in /etc/test/nested/copy/1580521
	I1209 04:42:03.423212 1620518 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1580521
	I1209 04:42:03.431449 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem --> /etc/ssl/certs/15805212.pem (1708 bytes)
	I1209 04:42:03.450423 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/test/nested/copy/1580521/hosts --> /etc/test/nested/copy/1580521/hosts (40 bytes)
	I1209 04:42:03.470159 1620518 start.go:296] duration metric: took 176.617533ms for postStartSetup
	I1209 04:42:03.470235 1620518 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 04:42:03.470292 1620518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:42:03.488346 1620518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:42:03.593519 1620518 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 04:42:03.598841 1620518 fix.go:56] duration metric: took 1.743264094s for fixHost
	I1209 04:42:03.598859 1620518 start.go:83] releasing machines lock for "functional-331811", held for 1.743308418s
	I1209 04:42:03.598929 1620518 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-331811
	I1209 04:42:03.617266 1620518 ssh_runner.go:195] Run: cat /version.json
	I1209 04:42:03.617315 1620518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:42:03.617558 1620518 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 04:42:03.617603 1620518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:42:03.646611 1620518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:42:03.653495 1620518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:42:03.852499 1620518 ssh_runner.go:195] Run: systemctl --version
	I1209 04:42:03.859513 1620518 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 04:42:03.897674 1620518 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 04:42:03.902590 1620518 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 04:42:03.902664 1620518 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 04:42:03.911194 1620518 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 04:42:03.911208 1620518 start.go:496] detecting cgroup driver to use...
	I1209 04:42:03.911240 1620518 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 04:42:03.911304 1620518 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 04:42:03.926479 1620518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 04:42:03.940314 1620518 docker.go:218] disabling cri-docker service (if available) ...
	I1209 04:42:03.940374 1620518 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 04:42:03.956989 1620518 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 04:42:03.970857 1620518 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 04:42:04.105722 1620518 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 04:42:04.221024 1620518 docker.go:234] disabling docker service ...
	I1209 04:42:04.221082 1620518 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 04:42:04.236606 1620518 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 04:42:04.259126 1620518 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 04:42:04.406348 1620518 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 04:42:04.537870 1620518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 04:42:04.550770 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 04:42:04.565609 1620518 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 04:42:04.565666 1620518 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:42:04.574449 1620518 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 04:42:04.574512 1620518 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:42:04.583819 1620518 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:42:04.592696 1620518 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:42:04.601828 1620518 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 04:42:04.610342 1620518 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:42:04.619401 1620518 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:42:04.628176 1620518 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:42:04.637069 1620518 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 04:42:04.644806 1620518 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 04:42:04.652309 1620518 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:42:04.767112 1620518 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 04:42:04.935446 1620518 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 04:42:04.935507 1620518 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 04:42:04.939304 1620518 start.go:564] Will wait 60s for crictl version
	I1209 04:42:04.939369 1620518 ssh_runner.go:195] Run: which crictl
	I1209 04:42:04.942772 1620518 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 04:42:04.967172 1620518 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1209 04:42:04.967246 1620518 ssh_runner.go:195] Run: crio --version
	I1209 04:42:05.000450 1620518 ssh_runner.go:195] Run: crio --version
	I1209 04:42:05.039508 1620518 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1209 04:42:05.042351 1620518 cli_runner.go:164] Run: docker network inspect functional-331811 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 04:42:05.058209 1620518 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1209 04:42:05.065398 1620518 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1209 04:42:05.068071 1620518 kubeadm.go:884] updating cluster {Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 04:42:05.068222 1620518 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1209 04:42:05.068288 1620518 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:42:05.125308 1620518 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 04:42:05.125320 1620518 crio.go:433] Images already preloaded, skipping extraction
	I1209 04:42:05.125384 1620518 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:42:05.156125 1620518 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 04:42:05.156137 1620518 cache_images.go:86] Images are preloaded, skipping loading
	I1209 04:42:05.156143 1620518 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1209 04:42:05.156245 1620518 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
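	# the empty ExecStart= above resets the ExecStart inherited from the stock
	# kubelet unit; systemd requires this before a drop-in may override the command line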
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-331811 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 04:42:05.156329 1620518 ssh_runner.go:195] Run: crio config
	I1209 04:42:05.230295 1620518 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1209 04:42:05.230327 1620518 cni.go:84] Creating CNI manager for ""
	I1209 04:42:05.230335 1620518 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 04:42:05.230348 1620518 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 04:42:05.230371 1620518 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-331811 NodeName:functional-331811 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 04:42:05.230520 1620518 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-331811"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
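	# The extraArgs entries above become --name=value flags on each static pod's
	# command line; the apiserver entry is the user-supplied --extra-config value,
	# which replaces (rather than extends) the default admission-plugin list, as
	# the extraconfig.go line earlier in this log notes, e.g.:
	#   kube-apiserver --enable-admission-plugins=NamespaceAutoProvision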
	
	I1209 04:42:05.230600 1620518 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1209 04:42:05.238799 1620518 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 04:42:05.238882 1620518 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 04:42:05.246819 1620518 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1209 04:42:05.260010 1620518 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1209 04:42:05.273192 1620518 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1209 04:42:05.287174 1620518 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1209 04:42:05.291010 1620518 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:42:05.412581 1620518 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 04:42:05.825078 1620518 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811 for IP: 192.168.49.2
	I1209 04:42:05.825089 1620518 certs.go:195] generating shared ca certs ...
	I1209 04:42:05.825104 1620518 certs.go:227] acquiring lock for ca certs: {Name:mkbe8bce08db7aa945866791683d426e1b560718 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:42:05.825273 1620518 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key
	I1209 04:42:05.825311 1620518 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key
	I1209 04:42:05.825317 1620518 certs.go:257] generating profile certs ...
	I1209 04:42:05.825400 1620518 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.key
	I1209 04:42:05.825453 1620518 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.key.29f4af34
	I1209 04:42:05.825489 1620518 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.key
	I1209 04:42:05.825606 1620518 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem (1338 bytes)
	W1209 04:42:05.825637 1620518 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521_empty.pem, impossibly tiny 0 bytes
	I1209 04:42:05.825643 1620518 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 04:42:05.825670 1620518 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem (1078 bytes)
	I1209 04:42:05.825692 1620518 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem (1123 bytes)
	I1209 04:42:05.825717 1620518 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem (1675 bytes)
	I1209 04:42:05.825764 1620518 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem (1708 bytes)
	I1209 04:42:05.826339 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 04:42:05.847398 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 04:42:05.867264 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 04:42:05.887896 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 04:42:05.907076 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 04:42:05.926224 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 04:42:05.944236 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 04:42:05.962834 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 04:42:05.981333 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem --> /usr/share/ca-certificates/15805212.pem (1708 bytes)
	I1209 04:42:06.001204 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 04:42:06.024226 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem --> /usr/share/ca-certificates/1580521.pem (1338 bytes)
	I1209 04:42:06.044638 1620518 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 04:42:06.059443 1620518 ssh_runner.go:195] Run: openssl version
	I1209 04:42:06.066215 1620518 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15805212.pem
	I1209 04:42:06.074237 1620518 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15805212.pem /etc/ssl/certs/15805212.pem
	I1209 04:42:06.083015 1620518 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15805212.pem
	I1209 04:42:06.087232 1620518 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:27 /usr/share/ca-certificates/15805212.pem
	I1209 04:42:06.087310 1620518 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15805212.pem
	I1209 04:42:06.129553 1620518 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 04:42:06.137400 1620518 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:42:06.144988 1620518 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 04:42:06.152871 1620518 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:42:06.156811 1620518 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:17 /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:42:06.156876 1620518 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:42:06.198268 1620518 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 04:42:06.205673 1620518 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1580521.pem
	I1209 04:42:06.212766 1620518 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1580521.pem /etc/ssl/certs/1580521.pem
	I1209 04:42:06.220239 1620518 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1580521.pem
	I1209 04:42:06.223985 1620518 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:27 /usr/share/ca-certificates/1580521.pem
	I1209 04:42:06.224039 1620518 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1580521.pem
	I1209 04:42:06.265241 1620518 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
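The three probe/link/hash/verify sequences above follow the OpenSSL CA-directory convention: "openssl x509 -hash -noout" prints the subject-name hash of a certificate (here 3ec20f2e, b5213941, and 51391683), and OpenSSL expects each trusted cert to be reachable as /etc/ssl/certs/<hash>.0. Below is a minimal Go sketch of that convention; it is illustrative only, not minikube source, and the hardcoded cert path is just one of the files named in the log.

    // hashsymlink.go - illustrative sketch, not minikube code.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem" // one of the files in the log

        // "openssl x509 -hash -noout -in <cert>" prints the subject-name hash
        // OpenSSL uses to locate CA certificates (e.g. b5213941).
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, "hash failed:", err)
            os.Exit(1)
        }
        hash := strings.TrimSpace(string(out))

        // OpenSSL resolves a CA via the symlink <hash>.0 in /etc/ssl/certs.
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        os.Remove(link) // "ln -fs" semantics: replace any existing link
        if err := os.Symlink(cert, link); err != nil {
            fmt.Fprintln(os.Stderr, "symlink failed:", err)
            os.Exit(1)
        }
        fmt.Println("linked", link, "->", cert)
    }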
	I1209 04:42:06.272666 1620518 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 04:42:06.276249 1620518 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 04:42:06.318459 1620518 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 04:42:06.361504 1620518 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 04:42:06.402819 1620518 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 04:42:06.443793 1620518 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 04:42:06.485065 1620518 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
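The "-checkend 86400" probes above ask openssl a single question per certificate: will it still be valid 86400 seconds (24 hours) from now? Exit status 0 means yes; exit status 1 would force regeneration. A rough Go equivalent using crypto/x509 follows, a sketch rather than minikube's implementation.

    // checkend.go - sketch of what "openssl x509 -checkend 86400" verifies.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Path taken from the log; any PEM-encoded certificate works here.
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block found")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // checkend 86400: exit 0 if still valid in 24h, else exit 1.
        if cert.NotAfter.Before(time.Now().Add(24 * time.Hour)) {
            fmt.Println("certificate expires within 86400s")
            os.Exit(1)
        }
        fmt.Println("certificate valid for at least another 24h")
    }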
	I1209 04:42:06.526159 1620518 kubeadm.go:401] StartCluster: {Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:42:06.526240 1620518 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 04:42:06.526302 1620518 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:42:06.557743 1620518 cri.go:89] found id: ""
	I1209 04:42:06.557806 1620518 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 04:42:06.565919 1620518 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1209 04:42:06.565929 1620518 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1209 04:42:06.565979 1620518 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 04:42:06.574421 1620518 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:42:06.574975 1620518 kubeconfig.go:125] found "functional-331811" server: "https://192.168.49.2:8441"
	I1209 04:42:06.576238 1620518 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 04:42:06.585800 1620518 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-09 04:27:27.994828232 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-09 04:42:05.282481991 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
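Drift detection here is just "diff -u" over the stored kubeadm.yaml and the freshly rendered kubeadm.yaml.new: diff's exit status 1 signals a difference, which minikube treats as config drift and answers with a cluster reconfigure. A sketch of that check in Go, under the assumption that only exit code 1 means "files differ":

    // driftcheck.go - sketch of the drift check implied by the log above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("diff", "-u",
            "/var/tmp/minikube/kubeadm.yaml",     // config the cluster last started with
            "/var/tmp/minikube/kubeadm.yaml.new", // config rendered for this start
        ).CombinedOutput()
        if err == nil {
            fmt.Println("no drift: configs are identical")
            return
        }
        // diff exits 1 when files differ; any other failure is a real error.
        if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
            fmt.Printf("drift detected, cluster will be reconfigured:\n%s", out)
            return
        }
        fmt.Println("diff failed:", err)
    }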
	I1209 04:42:06.585820 1620518 kubeadm.go:1161] stopping kube-system containers ...
	I1209 04:42:06.585830 1620518 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 04:42:06.585887 1620518 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:42:06.615364 1620518 cri.go:89] found id: ""
	I1209 04:42:06.615424 1620518 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 04:42:06.632416 1620518 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 04:42:06.640276 1620518 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec  9 04:31 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  9 04:31 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec  9 04:31 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec  9 04:31 /etc/kubernetes/scheduler.conf
	
	I1209 04:42:06.640334 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1209 04:42:06.648234 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1209 04:42:06.655526 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:42:06.655581 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 04:42:06.663036 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1209 04:42:06.670853 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:42:06.670911 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 04:42:06.678990 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1209 04:42:06.687863 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:42:06.687915 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
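Each static kubeconfig is grepped for the expected endpoint https://control-plane.minikube.internal:8441; when grep exits 1 the file is treated as stale and removed so the later "kubeadm init phase kubeconfig" regenerates it. A hedged Go sketch of the same idea (reading the file instead of shelling out to grep):

    // kubeconfigcheck.go - sketch only, not minikube source.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const want = "https://control-plane.minikube.internal:8441"
        for _, f := range []string{
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(f)
            if err != nil {
                continue // missing file: kubeadm will simply generate it
            }
            if !strings.Contains(string(data), want) {
                // Mirrors the "sudo rm -f" above so kubeadm rewrites the file.
                fmt.Println("stale endpoint, removing", f)
                os.Remove(f)
            }
        }
    }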
	I1209 04:42:06.696417 1620518 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 04:42:06.705368 1620518 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 04:42:06.756797 1620518 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 04:42:08.115058 1620518 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.358236541s)
	I1209 04:42:08.115116 1620518 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 04:42:08.320381 1620518 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 04:42:08.380846 1620518 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
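Rather than a full "kubeadm init", the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the existing config, with the versioned binaries directory prepended to PATH. A sketch of that sequencing; the phase list is read off the log lines above, and the code is illustrative, not minikube's.

    // restartphases.go - sketch of the selective "init phase" replay above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, phase := range phases {
            args := append([]string{"init", "phase"}, phase...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command("kubeadm", args...)
            // Duplicate PATH entries are fine: Go uses the last value for a key.
            cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.35.0-beta.0:"+os.Getenv("PATH"))
            if out, err := cmd.CombinedOutput(); err != nil {
                fmt.Printf("phase %v failed: %v\n%s", phase, err, out)
                return
            }
        }
        fmt.Println("all restart phases completed")
    }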
	I1209 04:42:08.425206 1620518 api_server.go:52] waiting for apiserver process to appear ...
	I1209 04:42:08.425277 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:08.925770 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the identical pgrep probe repeats every 500ms with no match, through ...]
	I1209 04:43:07.925648 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
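The run above is a plain poll loop: every 500ms minikube asks pgrep whether a process matching "kube-apiserver.*minikube.*" exists, and after roughly a minute without a match it falls back to collecting diagnostics. A minimal Go sketch of the loop, with the one-minute window inferred from the timestamps rather than taken from minikube's actual timeout:

    // waitapiserver.go - sketch of the 500ms apiserver poll seen above.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        // One-minute window inferred from the log, not a real minikube constant.
        deadline := time.Now().Add(60 * time.Second)
        for time.Now().Before(deadline) {
            // pgrep exits 0 and prints the PID when a matching process exists.
            out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil {
                fmt.Printf("kube-apiserver up, pid %s", out)
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out; falling back to log collection")
    }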
	I1209 04:43:08.425431 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:08.425513 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:08.451611 1620518 cri.go:89] found id: ""
	I1209 04:43:08.451625 1620518 logs.go:282] 0 containers: []
	W1209 04:43:08.451634 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:08.451644 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:08.451703 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:08.478028 1620518 cri.go:89] found id: ""
	I1209 04:43:08.478042 1620518 logs.go:282] 0 containers: []
	W1209 04:43:08.478049 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:08.478054 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:08.478116 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:08.504952 1620518 cri.go:89] found id: ""
	I1209 04:43:08.504967 1620518 logs.go:282] 0 containers: []
	W1209 04:43:08.504974 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:08.504980 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:08.505037 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:08.531444 1620518 cri.go:89] found id: ""
	I1209 04:43:08.531460 1620518 logs.go:282] 0 containers: []
	W1209 04:43:08.531468 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:08.531473 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:08.531558 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:08.557796 1620518 cri.go:89] found id: ""
	I1209 04:43:08.557810 1620518 logs.go:282] 0 containers: []
	W1209 04:43:08.557817 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:08.557822 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:08.557878 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:08.589421 1620518 cri.go:89] found id: ""
	I1209 04:43:08.589436 1620518 logs.go:282] 0 containers: []
	W1209 04:43:08.589443 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:08.589448 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:08.589505 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:08.626762 1620518 cri.go:89] found id: ""
	I1209 04:43:08.626776 1620518 logs.go:282] 0 containers: []
	W1209 04:43:08.626783 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:08.626792 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:08.626802 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:08.694456 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:08.694477 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:08.709310 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:08.709333 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:08.773551 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:08.764935   11065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:08.765641   11065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:08.766378   11065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:08.767874   11065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:08.768158   11065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:43:08.764935   11065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:08.765641   11065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:08.766378   11065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:08.767874   11065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:08.768158   11065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:08.773573 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:08.773584 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:08.840868 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:08.840888 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
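The "container status" step uses a shell fallback chain: try crictl (resolving it with "which crictl || echo crictl"), and if that fails, fall back to "docker ps -a". A small Go sketch of the same preference order; error handling is simplified and the sudo calls assume passwordless sudo.

    // containerstatus.go - sketch of the crictl-then-docker fallback above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Prefer crictl; on any failure fall back to docker, as the shell chain does.
        if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
            fmt.Print(string(out))
            return
        }
        out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
        if err != nil {
            fmt.Println("neither crictl nor docker could list containers:", err)
            return
        }
        fmt.Print(string(out))
    }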
	I1209 04:43:11.374296 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:11.384818 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:11.384880 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:11.413700 1620518 cri.go:89] found id: ""
	I1209 04:43:11.413713 1620518 logs.go:282] 0 containers: []
	W1209 04:43:11.413720 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:11.413725 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:11.413783 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:11.439148 1620518 cri.go:89] found id: ""
	I1209 04:43:11.439163 1620518 logs.go:282] 0 containers: []
	W1209 04:43:11.439170 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:11.439175 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:11.439236 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:11.468833 1620518 cri.go:89] found id: ""
	I1209 04:43:11.468847 1620518 logs.go:282] 0 containers: []
	W1209 04:43:11.468854 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:11.468859 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:11.468917 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:11.501328 1620518 cri.go:89] found id: ""
	I1209 04:43:11.501343 1620518 logs.go:282] 0 containers: []
	W1209 04:43:11.501350 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:11.501355 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:11.501420 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:11.527673 1620518 cri.go:89] found id: ""
	I1209 04:43:11.527687 1620518 logs.go:282] 0 containers: []
	W1209 04:43:11.527695 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:11.527700 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:11.527757 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:11.552531 1620518 cri.go:89] found id: ""
	I1209 04:43:11.552545 1620518 logs.go:282] 0 containers: []
	W1209 04:43:11.552552 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:11.552557 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:11.552618 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:11.591493 1620518 cri.go:89] found id: ""
	I1209 04:43:11.591507 1620518 logs.go:282] 0 containers: []
	W1209 04:43:11.591514 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:11.591522 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:11.591538 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:11.626001 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:11.626017 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:11.699914 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:11.699939 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:11.715894 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:11.715917 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:11.780735 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:11.772451   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:11.773056   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:11.774787   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:11.775166   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:11.776611   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:43:11.772451   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:11.773056   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:11.774787   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:11.775166   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:11.776611   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:11.780754 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:11.780765 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:14.352369 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:14.362558 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:14.362633 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:14.388407 1620518 cri.go:89] found id: ""
	I1209 04:43:14.388421 1620518 logs.go:282] 0 containers: []
	W1209 04:43:14.388428 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:14.388433 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:14.388490 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:14.415937 1620518 cri.go:89] found id: ""
	I1209 04:43:14.415952 1620518 logs.go:282] 0 containers: []
	W1209 04:43:14.415960 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:14.415965 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:14.416029 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:14.445418 1620518 cri.go:89] found id: ""
	I1209 04:43:14.445433 1620518 logs.go:282] 0 containers: []
	W1209 04:43:14.445440 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:14.445445 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:14.445513 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:14.471362 1620518 cri.go:89] found id: ""
	I1209 04:43:14.471376 1620518 logs.go:282] 0 containers: []
	W1209 04:43:14.471383 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:14.471388 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:14.471452 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:14.503134 1620518 cri.go:89] found id: ""
	I1209 04:43:14.503148 1620518 logs.go:282] 0 containers: []
	W1209 04:43:14.503155 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:14.503160 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:14.503219 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:14.529790 1620518 cri.go:89] found id: ""
	I1209 04:43:14.529803 1620518 logs.go:282] 0 containers: []
	W1209 04:43:14.529811 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:14.529816 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:14.529889 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:14.555803 1620518 cri.go:89] found id: ""
	I1209 04:43:14.555817 1620518 logs.go:282] 0 containers: []
	W1209 04:43:14.555824 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:14.555832 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:14.555843 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:14.632593 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:14.632611 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:14.648671 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:14.648687 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:14.713371 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:14.705883   11280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:14.706301   11280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:14.707740   11280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:14.708041   11280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:14.709450   11280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:43:14.705883   11280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:14.706301   11280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:14.707740   11280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:14.708041   11280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:14.709450   11280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:14.713382 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:14.713400 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:14.783824 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:14.783843 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:17.318936 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:17.329339 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:17.329407 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:17.356311 1620518 cri.go:89] found id: ""
	I1209 04:43:17.356330 1620518 logs.go:282] 0 containers: []
	W1209 04:43:17.356351 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:17.356356 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:17.356416 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:17.386438 1620518 cri.go:89] found id: ""
	I1209 04:43:17.386452 1620518 logs.go:282] 0 containers: []
	W1209 04:43:17.386460 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:17.386465 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:17.386528 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:17.411209 1620518 cri.go:89] found id: ""
	I1209 04:43:17.411222 1620518 logs.go:282] 0 containers: []
	W1209 04:43:17.411229 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:17.411234 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:17.411291 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:17.437189 1620518 cri.go:89] found id: ""
	I1209 04:43:17.437201 1620518 logs.go:282] 0 containers: []
	W1209 04:43:17.437208 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:17.437229 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:17.437286 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:17.463836 1620518 cri.go:89] found id: ""
	I1209 04:43:17.463850 1620518 logs.go:282] 0 containers: []
	W1209 04:43:17.463857 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:17.463862 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:17.463945 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:17.490604 1620518 cri.go:89] found id: ""
	I1209 04:43:17.490617 1620518 logs.go:282] 0 containers: []
	W1209 04:43:17.490625 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:17.490630 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:17.490691 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:17.517583 1620518 cri.go:89] found id: ""
	I1209 04:43:17.517597 1620518 logs.go:282] 0 containers: []
	W1209 04:43:17.517605 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:17.517612 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:17.517623 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:17.532622 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:17.532638 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:17.611464 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:17.600424   11377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:17.601337   11377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:17.605117   11377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:17.605586   11377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:17.607164   11377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:43:17.600424   11377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:17.601337   11377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:17.605117   11377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:17.605586   11377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:17.607164   11377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:17.611477 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:17.611487 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:17.693672 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:17.693692 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:17.723232 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:17.723249 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:20.294145 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:20.304681 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:20.304742 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:20.333282 1620518 cri.go:89] found id: ""
	I1209 04:43:20.333297 1620518 logs.go:282] 0 containers: []
	W1209 04:43:20.333304 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:20.333309 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:20.333367 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:20.363210 1620518 cri.go:89] found id: ""
	I1209 04:43:20.363224 1620518 logs.go:282] 0 containers: []
	W1209 04:43:20.363231 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:20.363236 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:20.363300 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:20.387964 1620518 cri.go:89] found id: ""
	I1209 04:43:20.387978 1620518 logs.go:282] 0 containers: []
	W1209 04:43:20.387985 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:20.387995 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:20.388054 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:20.414851 1620518 cri.go:89] found id: ""
	I1209 04:43:20.414864 1620518 logs.go:282] 0 containers: []
	W1209 04:43:20.414871 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:20.414876 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:20.414943 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:20.441500 1620518 cri.go:89] found id: ""
	I1209 04:43:20.441514 1620518 logs.go:282] 0 containers: []
	W1209 04:43:20.441521 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:20.441526 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:20.441584 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:20.468302 1620518 cri.go:89] found id: ""
	I1209 04:43:20.468318 1620518 logs.go:282] 0 containers: []
	W1209 04:43:20.468325 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:20.468331 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:20.468393 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:20.497314 1620518 cri.go:89] found id: ""
	I1209 04:43:20.497328 1620518 logs.go:282] 0 containers: []
	W1209 04:43:20.497345 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:20.497354 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:20.497364 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:20.570464 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:20.570492 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:20.586642 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:20.586660 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:20.665367 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:20.657066   11489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:20.657608   11489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:20.659336   11489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:20.659839   11489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:20.661420   11489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:43:20.657066   11489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:20.657608   11489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:20.659336   11489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:20.659839   11489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:20.661420   11489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
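Every `kubectl describe nodes` attempt in this stretch fails the same way: `dial tcp [::1]:8441: connect: connection refused`, meaning nothing is listening on the profile's apiserver port at all. A cheap way to confirm that before invoking kubectl is a raw TCP dial; this is an illustrative pre-check, not something minikube performs at this point:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // localhost:8441 is the apiserver endpoint kubectl fails to reach above.
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            // With no apiserver process running, this prints the same
            // "connect: connection refused" seen in the log.
            fmt.Println("apiserver port unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on :8441")
    }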
	I1209 04:43:20.665378 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:20.665389 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:20.733648 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:20.733669 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
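The "container status" gather that closes each pass uses a shell fallback chain: `which crictl || echo crictl` picks a crictl binary from PATH (or falls back to the bare name), and if that whole invocation fails, `sudo docker ps -a` is tried instead. A sketch of driving the same command from Go, assuming a local /bin/bash as in the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Identical fallback chain to the log's gather command.
        cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("container status gather failed:", err)
        }
    }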
	I1209 04:43:23.265697 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:23.275834 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:23.275893 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:23.304587 1620518 cri.go:89] found id: ""
	I1209 04:43:23.304613 1620518 logs.go:282] 0 containers: []
	W1209 04:43:23.304620 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:23.304626 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:23.304692 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:23.329381 1620518 cri.go:89] found id: ""
	I1209 04:43:23.329406 1620518 logs.go:282] 0 containers: []
	W1209 04:43:23.329414 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:23.329419 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:23.329485 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:23.355201 1620518 cri.go:89] found id: ""
	I1209 04:43:23.355215 1620518 logs.go:282] 0 containers: []
	W1209 04:43:23.355222 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:23.355227 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:23.355289 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:23.380238 1620518 cri.go:89] found id: ""
	I1209 04:43:23.380251 1620518 logs.go:282] 0 containers: []
	W1209 04:43:23.380258 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:23.380263 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:23.380322 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:23.409750 1620518 cri.go:89] found id: ""
	I1209 04:43:23.409764 1620518 logs.go:282] 0 containers: []
	W1209 04:43:23.409771 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:23.409776 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:23.409838 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:23.437575 1620518 cri.go:89] found id: ""
	I1209 04:43:23.437588 1620518 logs.go:282] 0 containers: []
	W1209 04:43:23.437595 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:23.437600 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:23.437657 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:23.464403 1620518 cri.go:89] found id: ""
	I1209 04:43:23.464418 1620518 logs.go:282] 0 containers: []
	W1209 04:43:23.464425 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:23.464432 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:23.464444 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:23.479567 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:23.479583 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:23.543433 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:23.534948   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:23.535540   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:23.537123   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:23.537643   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:23.539288   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:43:23.534948   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:23.535540   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:23.537123   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:23.537643   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:23.539288   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:23.543443 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:23.543454 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:23.620689 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:23.620709 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:23.660232 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:23.660249 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
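Note the cadence: full gather passes start at 04:43:20, 04:43:23, 04:43:26, and so on, i.e. the apiserver is re-probed roughly every three seconds until it either answers or a deadline expires. A generic version of that wait loop; the three-second interval matches the log, while the two-minute timeout and the port-dial probe are assumptions:

    package main

    import (
        "errors"
        "fmt"
        "net"
        "time"
    )

    // apiserverUp is a stand-in probe: can we open the apiserver port?
    func apiserverUp() bool {
        conn, err := net.DialTimeout("tcp", "localhost:8441", time.Second)
        if err != nil {
            return false
        }
        conn.Close()
        return true
    }

    // waitFor polls probe every interval until it succeeds or timeout elapses.
    func waitFor(probe func() bool, interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if probe() {
                return nil
            }
            if time.Now().After(deadline) {
                return errors.New("timed out waiting for apiserver")
            }
            time.Sleep(interval)
        }
    }

    func main() {
        if err := waitFor(apiserverUp, 3*time.Second, 2*time.Minute); err != nil {
            fmt.Println(err) // the log above is what each failed pass looks like
        }
    }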
	[Seven further gather passes at 04:43:26, 04:43:29, 04:43:32, 04:43:35, 04:43:38, 04:43:41, and 04:43:44 are omitted here as duplicates: each finds no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet containers, and each `kubectl describe nodes` attempt fails with the same "connection refused" on localhost:8441.]
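Besides the container probes, each pass pulls the last 400 lines of the kubelet and CRI-O journals (`sudo journalctl -u kubelet -n 400`, `sudo journalctl -u crio -n 400`). To replay the same gather by hand on the node, a sketch using the unit names from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // tailUnit mirrors the log's gather command: sudo journalctl -u <unit> -n 400.
    func tailUnit(unit string) (string, error) {
        out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", "400").CombinedOutput()
        return string(out), err
    }

    func main() {
        for _, unit := range []string{"kubelet", "crio"} {
            out, err := tailUnit(unit)
            if err != nil {
                fmt.Printf("journalctl -u %s failed: %v\n", unit, err)
                continue
            }
            fmt.Printf("--- last 400 lines of %s ---\n%s", unit, out)
        }
    }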
	I1209 04:43:46.935826 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:46.946383 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:46.946442 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:46.972025 1620518 cri.go:89] found id: ""
	I1209 04:43:46.972039 1620518 logs.go:282] 0 containers: []
	W1209 04:43:46.972046 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:46.972052 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:46.972114 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:47.005389 1620518 cri.go:89] found id: ""
	I1209 04:43:47.005411 1620518 logs.go:282] 0 containers: []
	W1209 04:43:47.005428 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:47.005434 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:47.005503 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:47.034137 1620518 cri.go:89] found id: ""
	I1209 04:43:47.034151 1620518 logs.go:282] 0 containers: []
	W1209 04:43:47.034159 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:47.034164 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:47.034224 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:47.060061 1620518 cri.go:89] found id: ""
	I1209 04:43:47.060074 1620518 logs.go:282] 0 containers: []
	W1209 04:43:47.060081 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:47.060086 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:47.060155 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:47.087325 1620518 cri.go:89] found id: ""
	I1209 04:43:47.087339 1620518 logs.go:282] 0 containers: []
	W1209 04:43:47.087346 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:47.087351 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:47.087412 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:47.113243 1620518 cri.go:89] found id: ""
	I1209 04:43:47.113257 1620518 logs.go:282] 0 containers: []
	W1209 04:43:47.113265 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:47.113271 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:47.113333 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:47.139697 1620518 cri.go:89] found id: ""
	I1209 04:43:47.139710 1620518 logs.go:282] 0 containers: []
	W1209 04:43:47.139718 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:47.139725 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:47.139735 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:47.208645 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:47.208665 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:47.224099 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:47.224118 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:47.291121 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:47.282532   12440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:47.283245   12440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:47.284856   12440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:47.285413   12440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:47.287073   12440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:43:47.282532   12440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:47.283245   12440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:47.284856   12440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:47.285413   12440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:47.287073   12440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:47.291131 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:47.291143 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:47.360007 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:47.360028 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:49.894321 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:49.904751 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:49.904813 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:49.933138 1620518 cri.go:89] found id: ""
	I1209 04:43:49.933152 1620518 logs.go:282] 0 containers: []
	W1209 04:43:49.933160 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:49.933165 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:49.933223 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:49.959143 1620518 cri.go:89] found id: ""
	I1209 04:43:49.959156 1620518 logs.go:282] 0 containers: []
	W1209 04:43:49.959163 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:49.959174 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:49.959231 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:49.984103 1620518 cri.go:89] found id: ""
	I1209 04:43:49.984118 1620518 logs.go:282] 0 containers: []
	W1209 04:43:49.984125 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:49.984130 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:49.984188 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:50.019299 1620518 cri.go:89] found id: ""
	I1209 04:43:50.019314 1620518 logs.go:282] 0 containers: []
	W1209 04:43:50.019322 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:50.019328 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:50.019394 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:50.050759 1620518 cri.go:89] found id: ""
	I1209 04:43:50.050773 1620518 logs.go:282] 0 containers: []
	W1209 04:43:50.050780 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:50.050785 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:50.050852 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:50.077915 1620518 cri.go:89] found id: ""
	I1209 04:43:50.077929 1620518 logs.go:282] 0 containers: []
	W1209 04:43:50.077937 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:50.077942 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:50.078003 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:50.105340 1620518 cri.go:89] found id: ""
	I1209 04:43:50.105354 1620518 logs.go:282] 0 containers: []
	W1209 04:43:50.105361 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:50.105369 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:50.105382 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:50.176940 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:50.168731   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:50.169401   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:50.171044   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:50.171455   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:50.173045   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:43:50.168731   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:50.169401   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:50.171044   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:50.171455   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:50.173045   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:50.176950 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:50.176961 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:50.250014 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:50.250035 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:50.279274 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:50.279290 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:50.344336 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:50.344354 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
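Every describe-nodes attempt in these rounds fails the same way: the endpoint at localhost:8441 refuses TCP connections, so kubectl cannot even fetch the API group list. The refusal can be confirmed independently of kubectl with a plain HTTPS probe; curl is an assumption here (it does not appear in this log), but the URL is exactly the one kubectl reports in the stderr above:

	# Assumed probe (curl is not part of the minikube log); -k skips cert
	# verification since the apiserver serves a self-signed certificate.
	curl -ksS --max-time 5 'https://localhost:8441/api?timeout=32s' \
	  || echo 'connection refused: nothing is listening on localhost:8441'
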
	I1209 04:43:52.861162 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:52.873255 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:52.873331 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:52.901728 1620518 cri.go:89] found id: ""
	I1209 04:43:52.901743 1620518 logs.go:282] 0 containers: []
	W1209 04:43:52.901750 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:52.901756 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:52.901847 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:52.927167 1620518 cri.go:89] found id: ""
	I1209 04:43:52.927180 1620518 logs.go:282] 0 containers: []
	W1209 04:43:52.927187 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:52.927192 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:52.927252 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:52.953243 1620518 cri.go:89] found id: ""
	I1209 04:43:52.953256 1620518 logs.go:282] 0 containers: []
	W1209 04:43:52.953263 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:52.953268 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:52.953326 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:52.981127 1620518 cri.go:89] found id: ""
	I1209 04:43:52.981140 1620518 logs.go:282] 0 containers: []
	W1209 04:43:52.981147 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:52.981152 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:52.981210 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:53.014584 1620518 cri.go:89] found id: ""
	I1209 04:43:53.014600 1620518 logs.go:282] 0 containers: []
	W1209 04:43:53.014608 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:53.014613 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:53.014681 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:53.041932 1620518 cri.go:89] found id: ""
	I1209 04:43:53.041946 1620518 logs.go:282] 0 containers: []
	W1209 04:43:53.041954 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:53.041960 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:53.042027 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:53.068705 1620518 cri.go:89] found id: ""
	I1209 04:43:53.068719 1620518 logs.go:282] 0 containers: []
	W1209 04:43:53.068725 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:53.068733 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:53.068749 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:53.097490 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:53.097506 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:53.162858 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:53.162879 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:53.177170 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:53.177185 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:53.240297 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:53.232197   12657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:53.232986   12657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:53.234644   12657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:53.234971   12657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:53.236396   12657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:43:53.232197   12657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:53.232986   12657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:53.234644   12657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:53.234971   12657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:53.236396   12657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:53.240307 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:53.240320 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:55.810542 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:55.820923 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:55.820985 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:55.860409 1620518 cri.go:89] found id: ""
	I1209 04:43:55.860422 1620518 logs.go:282] 0 containers: []
	W1209 04:43:55.860429 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:55.860434 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:55.860491 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:55.895639 1620518 cri.go:89] found id: ""
	I1209 04:43:55.895653 1620518 logs.go:282] 0 containers: []
	W1209 04:43:55.895660 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:55.895665 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:55.895729 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:55.922274 1620518 cri.go:89] found id: ""
	I1209 04:43:55.922289 1620518 logs.go:282] 0 containers: []
	W1209 04:43:55.922297 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:55.922302 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:55.922366 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:55.948415 1620518 cri.go:89] found id: ""
	I1209 04:43:55.948437 1620518 logs.go:282] 0 containers: []
	W1209 04:43:55.948444 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:55.948448 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:55.948509 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:55.977442 1620518 cri.go:89] found id: ""
	I1209 04:43:55.977456 1620518 logs.go:282] 0 containers: []
	W1209 04:43:55.977463 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:55.977468 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:55.977525 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:56.006812 1620518 cri.go:89] found id: ""
	I1209 04:43:56.006827 1620518 logs.go:282] 0 containers: []
	W1209 04:43:56.006835 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:56.006841 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:56.006920 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:56.035113 1620518 cri.go:89] found id: ""
	I1209 04:43:56.035128 1620518 logs.go:282] 0 containers: []
	W1209 04:43:56.035135 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:56.035143 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:56.035161 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:56.108405 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:56.099799   12746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:56.100584   12746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:56.102265   12746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:56.102913   12746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:56.104653   12746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:43:56.099799   12746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:56.100584   12746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:56.102265   12746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:56.102913   12746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:56.104653   12746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:56.108424 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:56.108435 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:56.178263 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:56.178284 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:56.211498 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:56.211513 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:56.278845 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:56.278867 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:58.794283 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:58.804745 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:58.804805 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:58.836463 1620518 cri.go:89] found id: ""
	I1209 04:43:58.836482 1620518 logs.go:282] 0 containers: []
	W1209 04:43:58.836489 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:58.836494 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:58.836551 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:58.871008 1620518 cri.go:89] found id: ""
	I1209 04:43:58.871021 1620518 logs.go:282] 0 containers: []
	W1209 04:43:58.871028 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:58.871033 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:58.871096 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:58.904275 1620518 cri.go:89] found id: ""
	I1209 04:43:58.904289 1620518 logs.go:282] 0 containers: []
	W1209 04:43:58.904296 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:58.904301 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:58.904363 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:58.934333 1620518 cri.go:89] found id: ""
	I1209 04:43:58.934346 1620518 logs.go:282] 0 containers: []
	W1209 04:43:58.934353 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:58.934361 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:58.934418 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:58.961476 1620518 cri.go:89] found id: ""
	I1209 04:43:58.961490 1620518 logs.go:282] 0 containers: []
	W1209 04:43:58.961497 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:58.961503 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:58.961562 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:58.987249 1620518 cri.go:89] found id: ""
	I1209 04:43:58.987263 1620518 logs.go:282] 0 containers: []
	W1209 04:43:58.987270 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:58.987276 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:58.987335 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:59.015314 1620518 cri.go:89] found id: ""
	I1209 04:43:59.015328 1620518 logs.go:282] 0 containers: []
	W1209 04:43:59.015335 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:59.015342 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:59.015353 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:59.079415 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:59.070310   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:59.071244   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:59.073057   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:59.073701   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:59.075400   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:43:59.070310   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:59.071244   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:59.073057   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:59.073701   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:59.075400   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:59.079425 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:59.079436 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:59.150742 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:59.150761 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:59.180649 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:59.180665 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:59.248002 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:59.248020 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:01.763804 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:01.774240 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:01.774302 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:01.802786 1620518 cri.go:89] found id: ""
	I1209 04:44:01.802800 1620518 logs.go:282] 0 containers: []
	W1209 04:44:01.802808 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:01.802813 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:01.802870 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:01.842779 1620518 cri.go:89] found id: ""
	I1209 04:44:01.842794 1620518 logs.go:282] 0 containers: []
	W1209 04:44:01.842801 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:01.842806 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:01.842867 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:01.874062 1620518 cri.go:89] found id: ""
	I1209 04:44:01.874081 1620518 logs.go:282] 0 containers: []
	W1209 04:44:01.874088 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:01.874093 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:01.874157 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:01.903692 1620518 cri.go:89] found id: ""
	I1209 04:44:01.903706 1620518 logs.go:282] 0 containers: []
	W1209 04:44:01.903713 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:01.903718 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:01.903777 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:01.933430 1620518 cri.go:89] found id: ""
	I1209 04:44:01.933444 1620518 logs.go:282] 0 containers: []
	W1209 04:44:01.933451 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:01.933456 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:01.933515 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:01.961286 1620518 cri.go:89] found id: ""
	I1209 04:44:01.961300 1620518 logs.go:282] 0 containers: []
	W1209 04:44:01.961307 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:01.961313 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:01.961373 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:01.990521 1620518 cri.go:89] found id: ""
	I1209 04:44:01.990535 1620518 logs.go:282] 0 containers: []
	W1209 04:44:01.990542 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:01.990550 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:01.990561 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:02.008959 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:02.008977 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:02.076349 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:02.067978   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:02.068680   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:02.070314   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:02.070881   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:02.072482   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:44:02.067978   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:02.068680   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:02.070314   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:02.070881   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:02.072482   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:44:02.076359 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:02.076370 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:02.144940 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:02.144960 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:02.175776 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:02.175793 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:04.751592 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:04.762232 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:04.762298 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:04.788096 1620518 cri.go:89] found id: ""
	I1209 04:44:04.788110 1620518 logs.go:282] 0 containers: []
	W1209 04:44:04.788117 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:04.788122 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:04.788184 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:04.829955 1620518 cri.go:89] found id: ""
	I1209 04:44:04.829969 1620518 logs.go:282] 0 containers: []
	W1209 04:44:04.829975 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:04.829981 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:04.830037 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:04.869304 1620518 cri.go:89] found id: ""
	I1209 04:44:04.869318 1620518 logs.go:282] 0 containers: []
	W1209 04:44:04.869325 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:04.869330 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:04.869389 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:04.900033 1620518 cri.go:89] found id: ""
	I1209 04:44:04.900048 1620518 logs.go:282] 0 containers: []
	W1209 04:44:04.900054 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:04.900060 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:04.900118 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:04.926358 1620518 cri.go:89] found id: ""
	I1209 04:44:04.926373 1620518 logs.go:282] 0 containers: []
	W1209 04:44:04.926381 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:04.926386 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:04.926446 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:04.952219 1620518 cri.go:89] found id: ""
	I1209 04:44:04.952233 1620518 logs.go:282] 0 containers: []
	W1209 04:44:04.952240 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:04.952245 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:04.952318 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:04.981606 1620518 cri.go:89] found id: ""
	I1209 04:44:04.981633 1620518 logs.go:282] 0 containers: []
	W1209 04:44:04.981640 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:04.981648 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:04.981659 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:05.054363 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:05.045151   13065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:05.046053   13065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:05.047917   13065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:05.048288   13065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:05.049848   13065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:44:05.045151   13065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:05.046053   13065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:05.047917   13065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:05.048288   13065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:05.049848   13065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:44:05.054374 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:05.054384 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:05.123486 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:05.123508 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:05.153591 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:05.153609 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:05.220156 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:05.220176 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:07.735728 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:07.746784 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:07.746849 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:07.773633 1620518 cri.go:89] found id: ""
	I1209 04:44:07.773646 1620518 logs.go:282] 0 containers: []
	W1209 04:44:07.773653 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:07.773658 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:07.773714 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:07.799209 1620518 cri.go:89] found id: ""
	I1209 04:44:07.799222 1620518 logs.go:282] 0 containers: []
	W1209 04:44:07.799230 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:07.799235 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:07.799289 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:07.833034 1620518 cri.go:89] found id: ""
	I1209 04:44:07.833047 1620518 logs.go:282] 0 containers: []
	W1209 04:44:07.833055 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:07.833060 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:07.833117 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:07.861960 1620518 cri.go:89] found id: ""
	I1209 04:44:07.861979 1620518 logs.go:282] 0 containers: []
	W1209 04:44:07.861986 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:07.861991 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:07.862048 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:07.891370 1620518 cri.go:89] found id: ""
	I1209 04:44:07.891384 1620518 logs.go:282] 0 containers: []
	W1209 04:44:07.891392 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:07.891398 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:07.891499 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:07.925093 1620518 cri.go:89] found id: ""
	I1209 04:44:07.925106 1620518 logs.go:282] 0 containers: []
	W1209 04:44:07.925113 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:07.925119 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:07.925179 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:07.953814 1620518 cri.go:89] found id: ""
	I1209 04:44:07.953828 1620518 logs.go:282] 0 containers: []
	W1209 04:44:07.953845 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:07.953853 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:07.953863 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:08.019480 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:08.019500 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:08.035405 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:08.035420 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:08.103942 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:08.095426   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:08.096263   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:08.097939   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:08.098274   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:08.099807   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:44:08.095426   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:08.096263   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:08.097939   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:08.098274   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:08.099807   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:44:08.103951 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:08.103964 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:08.173425 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:08.173447 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
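When the probe finds no control-plane process or containers, the same five log sources are collected on every round: kubelet, dmesg, describe nodes, CRI-O, and container status. Outside the test harness the equivalent one-shot collection looks like the sketch below; each command is the one the ssh_runner lines execute, and only the redirections to local files are an added assumption for offline inspection:

	# One-shot collection of the five sources the loop gathers
	# (file redirects assumed; commands copied from the log).
	sudo journalctl -u kubelet -n 400 > kubelet.log
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig > describe-nodes.txt 2>&1
	sudo journalctl -u crio -n 400 > crio.log
	sudo $(which crictl || echo crictl) ps -a > containers.txt \
	  || sudo docker ps -a > containers.txt
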
	I1209 04:44:10.707757 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:10.717859 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:10.717922 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:10.743691 1620518 cri.go:89] found id: ""
	I1209 04:44:10.743705 1620518 logs.go:282] 0 containers: []
	W1209 04:44:10.743712 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:10.743717 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:10.743775 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:10.769622 1620518 cri.go:89] found id: ""
	I1209 04:44:10.769636 1620518 logs.go:282] 0 containers: []
	W1209 04:44:10.769643 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:10.769648 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:10.769707 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:10.802785 1620518 cri.go:89] found id: ""
	I1209 04:44:10.802798 1620518 logs.go:282] 0 containers: []
	W1209 04:44:10.802806 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:10.802811 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:10.802870 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:10.833564 1620518 cri.go:89] found id: ""
	I1209 04:44:10.833579 1620518 logs.go:282] 0 containers: []
	W1209 04:44:10.833587 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:10.833592 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:10.833655 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:10.876749 1620518 cri.go:89] found id: ""
	I1209 04:44:10.876763 1620518 logs.go:282] 0 containers: []
	W1209 04:44:10.876770 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:10.876775 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:10.876832 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:10.907080 1620518 cri.go:89] found id: ""
	I1209 04:44:10.907093 1620518 logs.go:282] 0 containers: []
	W1209 04:44:10.907101 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:10.907106 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:10.907164 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:10.932888 1620518 cri.go:89] found id: ""
	I1209 04:44:10.932903 1620518 logs.go:282] 0 containers: []
	W1209 04:44:10.932910 1620518 logs.go:284] No container was found matching "kindnet"
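Each poll cycle enumerates the expected control-plane containers by name through crictl; an empty ID list for every component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) is what produces the run of 'No container was found matching ...' warnings. A rough standalone equivalent of that enumeration in Go (listContainerIDs is an illustrative helper name, not minikube's API):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs mirrors the `sudo crictl ps -a --quiet --name=<name>`
    // calls in the log: --quiet prints one container ID per line, or nothing.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, name := range []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        } {
            ids, err := listContainerIDs(name)
            if err != nil || len(ids) == 0 {
                fmt.Printf("no container found matching %q\n", name)
                continue
            }
            fmt.Printf("%s: %v\n", name, ids)
        }
    }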
	I1209 04:44:10.932918 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:10.932928 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:10.998090 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:10.998113 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:11.016501 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:11.016518 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:11.083628 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:11.075111   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:11.075522   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:11.077185   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:11.077924   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:11.079551   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:44:11.083645 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:11.083658 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:11.151855 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:11.151878 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
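The timestamps show the same cycle restarting roughly every 2.5 seconds (04:44:08, 04:44:10, 04:44:13, ...): each round begins with `sudo pgrep -xnf kube-apiserver.*minikube.*` and never finds a process. A hedged sketch of such a poll-until-deadline loop; the 4-minute deadline and 2.5-second interval here are assumptions for illustration, not minikube's exact tuning:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(4 * time.Minute) // assumed overall timeout
        for time.Now().Before(deadline) {
            // The liveness probe the log repeats: is a kube-apiserver
            // process for this minikube profile running at all?
            err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
            if err == nil { // pgrep exits 0 only when a match exists
                fmt.Println("kube-apiserver process found")
                return
            }
            time.Sleep(2500 * time.Millisecond) // matches the cadence in the log
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }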
	I1209 04:44:13.684470 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:13.694706 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:13.694766 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:13.720935 1620518 cri.go:89] found id: ""
	I1209 04:44:13.720948 1620518 logs.go:282] 0 containers: []
	W1209 04:44:13.720955 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:13.720960 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:13.721016 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:13.750286 1620518 cri.go:89] found id: ""
	I1209 04:44:13.750299 1620518 logs.go:282] 0 containers: []
	W1209 04:44:13.750306 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:13.750314 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:13.750372 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:13.774808 1620518 cri.go:89] found id: ""
	I1209 04:44:13.774822 1620518 logs.go:282] 0 containers: []
	W1209 04:44:13.774831 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:13.774836 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:13.774909 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:13.800153 1620518 cri.go:89] found id: ""
	I1209 04:44:13.800167 1620518 logs.go:282] 0 containers: []
	W1209 04:44:13.800174 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:13.800180 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:13.800237 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:13.833377 1620518 cri.go:89] found id: ""
	I1209 04:44:13.833402 1620518 logs.go:282] 0 containers: []
	W1209 04:44:13.833409 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:13.833415 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:13.833487 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:13.863754 1620518 cri.go:89] found id: ""
	I1209 04:44:13.863767 1620518 logs.go:282] 0 containers: []
	W1209 04:44:13.863774 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:13.863780 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:13.863836 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:13.899969 1620518 cri.go:89] found id: ""
	I1209 04:44:13.899983 1620518 logs.go:282] 0 containers: []
	W1209 04:44:13.899990 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:13.899997 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:13.900008 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:13.964963 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:13.964983 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:13.980119 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:13.980136 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:14.051622 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:14.042651   13381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:14.043666   13381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:14.045214   13381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:14.045756   13381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:14.047568   13381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:44:14.051632 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:14.051644 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:14.120152 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:14.120171 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
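With the apiserver unreachable, every cycle falls back to the same host-side evidence: the last 400 journal lines for the kubelet and crio units, warning-level-and-up dmesg output, a kubectl describe nodes attempt, and a CRI/Docker container listing. The shell commands below are copied verbatim from the Run: lines; wrapping them in a short Go program is purely illustrative:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        type gather struct{ name, cmd string }
        gathers := []gather{
            {"kubelet", "sudo journalctl -u kubelet -n 400"},
            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
            {"CRI-O", "sudo journalctl -u crio -n 400"},
            {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
        }
        for _, g := range gathers {
            out, err := exec.Command("/bin/bash", "-c", g.cmd).CombinedOutput()
            fmt.Printf("== %s (err=%v) ==\n%s\n", g.name, err, out)
        }
    }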
	I1209 04:44:16.651342 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:16.661695 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:16.661769 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:16.688695 1620518 cri.go:89] found id: ""
	I1209 04:44:16.688709 1620518 logs.go:282] 0 containers: []
	W1209 04:44:16.688717 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:16.688724 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:16.688783 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:16.714481 1620518 cri.go:89] found id: ""
	I1209 04:44:16.714495 1620518 logs.go:282] 0 containers: []
	W1209 04:44:16.714502 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:16.714507 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:16.714563 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:16.740966 1620518 cri.go:89] found id: ""
	I1209 04:44:16.740980 1620518 logs.go:282] 0 containers: []
	W1209 04:44:16.740987 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:16.740992 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:16.741048 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:16.772332 1620518 cri.go:89] found id: ""
	I1209 04:44:16.772346 1620518 logs.go:282] 0 containers: []
	W1209 04:44:16.772353 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:16.772358 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:16.772429 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:16.800889 1620518 cri.go:89] found id: ""
	I1209 04:44:16.800903 1620518 logs.go:282] 0 containers: []
	W1209 04:44:16.800910 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:16.800916 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:16.800979 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:16.836688 1620518 cri.go:89] found id: ""
	I1209 04:44:16.836702 1620518 logs.go:282] 0 containers: []
	W1209 04:44:16.836709 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:16.836715 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:16.836779 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:16.877225 1620518 cri.go:89] found id: ""
	I1209 04:44:16.877238 1620518 logs.go:282] 0 containers: []
	W1209 04:44:16.877245 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:16.877253 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:16.877263 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:16.947272 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:16.947292 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:16.964059 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:16.964075 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:17.033163 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:17.024900   13483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:17.025556   13483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:17.027130   13483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:17.027646   13483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:17.029289   13483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:44:17.033172 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:17.033183 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:17.101285 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:17.101306 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
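The describe-nodes step uses the node-local kubectl binary pinned to the cluster version (/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl) together with the in-node kubeconfig, so it hits the same connection-refused errors and exits 1; that non-zero exit is what logs.go:130 records as "failed describe nodes". A minimal sketch re-running that exact invocation (illustrative only):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("/bin/bash", "-c",
            "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes "+
                "--kubeconfig=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        if err != nil {
            // With the apiserver down this exits with status 1, matching
            // the "Process exited with status 1" warnings above.
            fmt.Printf("describe nodes failed: %v\n%s", err, out)
        }
    }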
	I1209 04:44:19.635736 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:19.645923 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:19.645987 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:19.671893 1620518 cri.go:89] found id: ""
	I1209 04:44:19.671907 1620518 logs.go:282] 0 containers: []
	W1209 04:44:19.671913 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:19.671918 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:19.671975 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:19.697145 1620518 cri.go:89] found id: ""
	I1209 04:44:19.697159 1620518 logs.go:282] 0 containers: []
	W1209 04:44:19.697166 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:19.697171 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:19.697228 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:19.726050 1620518 cri.go:89] found id: ""
	I1209 04:44:19.726064 1620518 logs.go:282] 0 containers: []
	W1209 04:44:19.726072 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:19.726077 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:19.726135 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:19.753277 1620518 cri.go:89] found id: ""
	I1209 04:44:19.753290 1620518 logs.go:282] 0 containers: []
	W1209 04:44:19.753297 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:19.753302 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:19.753364 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:19.778375 1620518 cri.go:89] found id: ""
	I1209 04:44:19.778388 1620518 logs.go:282] 0 containers: []
	W1209 04:44:19.778395 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:19.778410 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:19.778483 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:19.803668 1620518 cri.go:89] found id: ""
	I1209 04:44:19.803682 1620518 logs.go:282] 0 containers: []
	W1209 04:44:19.803690 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:19.803695 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:19.803757 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:19.841128 1620518 cri.go:89] found id: ""
	I1209 04:44:19.841142 1620518 logs.go:282] 0 containers: []
	W1209 04:44:19.841149 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:19.841157 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:19.841167 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:19.917953 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:19.917972 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:19.933437 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:19.933455 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:20.001189 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:19.992491   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:19.992867   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:19.994437   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:19.994797   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:19.996244   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:44:20.001200 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:20.001214 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:20.072973 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:20.072992 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:22.607218 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:22.618312 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:22.618373 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:22.644573 1620518 cri.go:89] found id: ""
	I1209 04:44:22.644587 1620518 logs.go:282] 0 containers: []
	W1209 04:44:22.644594 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:22.644600 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:22.644669 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:22.671737 1620518 cri.go:89] found id: ""
	I1209 04:44:22.671751 1620518 logs.go:282] 0 containers: []
	W1209 04:44:22.671758 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:22.671763 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:22.671819 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:22.697372 1620518 cri.go:89] found id: ""
	I1209 04:44:22.697386 1620518 logs.go:282] 0 containers: []
	W1209 04:44:22.697393 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:22.697398 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:22.697456 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:22.724412 1620518 cri.go:89] found id: ""
	I1209 04:44:22.724428 1620518 logs.go:282] 0 containers: []
	W1209 04:44:22.724436 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:22.724448 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:22.724512 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:22.757520 1620518 cri.go:89] found id: ""
	I1209 04:44:22.757533 1620518 logs.go:282] 0 containers: []
	W1209 04:44:22.757551 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:22.757556 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:22.757623 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:22.787926 1620518 cri.go:89] found id: ""
	I1209 04:44:22.787939 1620518 logs.go:282] 0 containers: []
	W1209 04:44:22.787946 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:22.787951 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:22.788014 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:22.813253 1620518 cri.go:89] found id: ""
	I1209 04:44:22.813267 1620518 logs.go:282] 0 containers: []
	W1209 04:44:22.813284 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:22.813292 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:22.813303 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:22.889757 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:22.889776 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:22.905834 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:22.905850 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:22.976939 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:22.967912   13692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:22.968798   13692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:22.970382   13692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:22.971021   13692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:22.972529   13692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:44:22.976949 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:22.976960 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:23.044862 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:23.044881 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:25.578382 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:25.589220 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:25.589287 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:25.617854 1620518 cri.go:89] found id: ""
	I1209 04:44:25.617868 1620518 logs.go:282] 0 containers: []
	W1209 04:44:25.617875 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:25.617880 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:25.617937 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:25.642864 1620518 cri.go:89] found id: ""
	I1209 04:44:25.642883 1620518 logs.go:282] 0 containers: []
	W1209 04:44:25.642890 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:25.642895 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:25.642952 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:25.670199 1620518 cri.go:89] found id: ""
	I1209 04:44:25.670213 1620518 logs.go:282] 0 containers: []
	W1209 04:44:25.670220 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:25.670225 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:25.670283 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:25.697688 1620518 cri.go:89] found id: ""
	I1209 04:44:25.697702 1620518 logs.go:282] 0 containers: []
	W1209 04:44:25.697720 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:25.697725 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:25.697827 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:25.723203 1620518 cri.go:89] found id: ""
	I1209 04:44:25.723218 1620518 logs.go:282] 0 containers: []
	W1209 04:44:25.723225 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:25.723230 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:25.723287 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:25.752776 1620518 cri.go:89] found id: ""
	I1209 04:44:25.752790 1620518 logs.go:282] 0 containers: []
	W1209 04:44:25.752798 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:25.752803 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:25.752866 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:25.778450 1620518 cri.go:89] found id: ""
	I1209 04:44:25.778474 1620518 logs.go:282] 0 containers: []
	W1209 04:44:25.778483 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:25.778490 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:25.778501 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:25.846732 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:25.846750 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:25.863685 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:25.863701 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:25.940317 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:25.931569   13798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:25.932325   13798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:25.934011   13798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:25.934352   13798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:25.936136   13798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:44:25.940328 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:25.940339 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:26.013087 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:26.013109 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:28.543111 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:28.553653 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:28.553717 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:28.579452 1620518 cri.go:89] found id: ""
	I1209 04:44:28.579465 1620518 logs.go:282] 0 containers: []
	W1209 04:44:28.579472 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:28.579478 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:28.579542 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:28.605894 1620518 cri.go:89] found id: ""
	I1209 04:44:28.605909 1620518 logs.go:282] 0 containers: []
	W1209 04:44:28.605916 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:28.605921 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:28.605983 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:28.633021 1620518 cri.go:89] found id: ""
	I1209 04:44:28.633044 1620518 logs.go:282] 0 containers: []
	W1209 04:44:28.633051 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:28.633057 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:28.633129 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:28.657926 1620518 cri.go:89] found id: ""
	I1209 04:44:28.657946 1620518 logs.go:282] 0 containers: []
	W1209 04:44:28.657953 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:28.657959 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:28.658027 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:28.685339 1620518 cri.go:89] found id: ""
	I1209 04:44:28.685353 1620518 logs.go:282] 0 containers: []
	W1209 04:44:28.685360 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:28.685366 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:28.685433 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:28.718472 1620518 cri.go:89] found id: ""
	I1209 04:44:28.718485 1620518 logs.go:282] 0 containers: []
	W1209 04:44:28.718492 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:28.718498 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:28.718554 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:28.748502 1620518 cri.go:89] found id: ""
	I1209 04:44:28.748516 1620518 logs.go:282] 0 containers: []
	W1209 04:44:28.748523 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:28.748531 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:28.748543 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:28.763578 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:28.763594 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:28.830210 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:28.817342   13896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:28.818137   13896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:28.819682   13896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:28.819980   13896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:28.823920   13896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:44:28.830220 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:28.830231 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:28.905378 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:28.905401 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:28.934445 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:28.934466 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:31.501091 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:31.511589 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:31.511662 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:31.537954 1620518 cri.go:89] found id: ""
	I1209 04:44:31.537967 1620518 logs.go:282] 0 containers: []
	W1209 04:44:31.537974 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:31.537979 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:31.538035 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:31.563399 1620518 cri.go:89] found id: ""
	I1209 04:44:31.563412 1620518 logs.go:282] 0 containers: []
	W1209 04:44:31.563419 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:31.563424 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:31.563481 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:31.590727 1620518 cri.go:89] found id: ""
	I1209 04:44:31.590741 1620518 logs.go:282] 0 containers: []
	W1209 04:44:31.590748 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:31.590753 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:31.590817 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:31.619991 1620518 cri.go:89] found id: ""
	I1209 04:44:31.620004 1620518 logs.go:282] 0 containers: []
	W1209 04:44:31.620012 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:31.620017 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:31.620073 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:31.646682 1620518 cri.go:89] found id: ""
	I1209 04:44:31.646695 1620518 logs.go:282] 0 containers: []
	W1209 04:44:31.646703 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:31.646709 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:31.646783 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:31.676240 1620518 cri.go:89] found id: ""
	I1209 04:44:31.676254 1620518 logs.go:282] 0 containers: []
	W1209 04:44:31.676261 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:31.676266 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:31.676324 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:31.701874 1620518 cri.go:89] found id: ""
	I1209 04:44:31.701898 1620518 logs.go:282] 0 containers: []
	W1209 04:44:31.701906 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:31.701914 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:31.701924 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:31.729913 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:31.729929 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:31.795202 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:31.795222 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:31.810455 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:31.810471 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:31.910056 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:31.901648   14015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:31.902306   14015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:31.903933   14015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:31.904418   14015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:31.906134   14015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:44:31.910067 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:31.910079 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:34.486956 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:34.497309 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:34.497372 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:34.523236 1620518 cri.go:89] found id: ""
	I1209 04:44:34.523250 1620518 logs.go:282] 0 containers: []
	W1209 04:44:34.523257 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:34.523262 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:34.523320 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:34.549906 1620518 cri.go:89] found id: ""
	I1209 04:44:34.549920 1620518 logs.go:282] 0 containers: []
	W1209 04:44:34.549935 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:34.549940 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:34.549997 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:34.577694 1620518 cri.go:89] found id: ""
	I1209 04:44:34.577708 1620518 logs.go:282] 0 containers: []
	W1209 04:44:34.577716 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:34.577721 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:34.577781 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:34.604297 1620518 cri.go:89] found id: ""
	I1209 04:44:34.604311 1620518 logs.go:282] 0 containers: []
	W1209 04:44:34.604319 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:34.604325 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:34.604388 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:34.629233 1620518 cri.go:89] found id: ""
	I1209 04:44:34.629249 1620518 logs.go:282] 0 containers: []
	W1209 04:44:34.629257 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:34.629262 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:34.629330 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:34.659380 1620518 cri.go:89] found id: ""
	I1209 04:44:34.659394 1620518 logs.go:282] 0 containers: []
	W1209 04:44:34.659401 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:34.659407 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:34.659466 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:34.688342 1620518 cri.go:89] found id: ""
	I1209 04:44:34.688356 1620518 logs.go:282] 0 containers: []
	W1209 04:44:34.688363 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:34.688370 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:34.688383 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:34.703538 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:34.703555 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:34.766893 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:34.758520   14106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:34.759198   14106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:34.760746   14106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:34.761300   14106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:34.763031   14106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:44:34.758520   14106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:34.759198   14106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:34.760746   14106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:34.761300   14106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:34.763031   14106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:44:34.766907 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:34.766925 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:34.835016 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:34.835035 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:34.867468 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:34.867484 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
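Each retry cycle above follows the same shape: minikube probes for every expected control-plane container with crictl, finds none, then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before trying again. A minimal bash sketch of that probe, runnable by hand on the node (the component list is taken from the names queried in this log; the loop itself is an illustration, not minikube's code):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      # empty output means no container (running or exited) matches this name
      [ -z "$ids" ] && echo "no container matching $c"
    done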
	I1209 04:44:37.441777 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:37.452150 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:37.452220 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:37.477442 1620518 cri.go:89] found id: ""
	I1209 04:44:37.477456 1620518 logs.go:282] 0 containers: []
	W1209 04:44:37.477463 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:37.477468 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:37.477525 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:37.503669 1620518 cri.go:89] found id: ""
	I1209 04:44:37.503683 1620518 logs.go:282] 0 containers: []
	W1209 04:44:37.503690 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:37.503696 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:37.503756 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:37.529304 1620518 cri.go:89] found id: ""
	I1209 04:44:37.529318 1620518 logs.go:282] 0 containers: []
	W1209 04:44:37.529326 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:37.529331 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:37.529388 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:37.555509 1620518 cri.go:89] found id: ""
	I1209 04:44:37.555523 1620518 logs.go:282] 0 containers: []
	W1209 04:44:37.555539 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:37.555545 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:37.555603 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:37.581297 1620518 cri.go:89] found id: ""
	I1209 04:44:37.581310 1620518 logs.go:282] 0 containers: []
	W1209 04:44:37.581328 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:37.581334 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:37.581403 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:37.607757 1620518 cri.go:89] found id: ""
	I1209 04:44:37.607774 1620518 logs.go:282] 0 containers: []
	W1209 04:44:37.607781 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:37.607787 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:37.607863 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:37.634135 1620518 cri.go:89] found id: ""
	I1209 04:44:37.634159 1620518 logs.go:282] 0 containers: []
	W1209 04:44:37.634167 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:37.634174 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:37.634187 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:37.698412 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:37.690495   14207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:37.691106   14207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:37.692656   14207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:37.693121   14207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:37.694648   14207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:44:37.690495   14207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:37.691106   14207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:37.692656   14207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:37.693121   14207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:37.694648   14207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:44:37.698423 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:37.698434 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:37.765691 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:37.765711 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:37.794807 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:37.794822 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:37.865591 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:37.865609 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:40.382843 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:40.393026 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:40.393086 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:40.417900 1620518 cri.go:89] found id: ""
	I1209 04:44:40.417913 1620518 logs.go:282] 0 containers: []
	W1209 04:44:40.417920 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:40.417926 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:40.417984 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:40.447221 1620518 cri.go:89] found id: ""
	I1209 04:44:40.447235 1620518 logs.go:282] 0 containers: []
	W1209 04:44:40.447242 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:40.447247 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:40.447305 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:40.472564 1620518 cri.go:89] found id: ""
	I1209 04:44:40.472578 1620518 logs.go:282] 0 containers: []
	W1209 04:44:40.472585 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:40.472591 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:40.472651 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:40.498097 1620518 cri.go:89] found id: ""
	I1209 04:44:40.498111 1620518 logs.go:282] 0 containers: []
	W1209 04:44:40.498118 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:40.498123 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:40.498182 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:40.523258 1620518 cri.go:89] found id: ""
	I1209 04:44:40.523271 1620518 logs.go:282] 0 containers: []
	W1209 04:44:40.523279 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:40.523287 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:40.523343 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:40.548390 1620518 cri.go:89] found id: ""
	I1209 04:44:40.548404 1620518 logs.go:282] 0 containers: []
	W1209 04:44:40.548411 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:40.548417 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:40.548475 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:40.573171 1620518 cri.go:89] found id: ""
	I1209 04:44:40.573185 1620518 logs.go:282] 0 containers: []
	W1209 04:44:40.573192 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:40.573199 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:40.573211 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:40.587922 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:40.587937 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:40.648925 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:40.640617   14317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:40.641385   14317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:40.643081   14317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:40.643670   14317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:40.645179   14317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:44:40.640617   14317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:40.641385   14317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:40.643081   14317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:40.643670   14317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:40.645179   14317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:44:40.648934 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:40.648945 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:40.721024 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:40.721047 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:40.756647 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:40.756664 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
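Every describe-nodes failure in this run is the same symptom: nothing is listening on apiserver port 8441, so each kubectl call dies with connection refused before reaching the API. Two quick host-side checks that would confirm this (hypothetical follow-up commands, not part of the original run):

    # is anything bound to the apiserver port?
    sudo ss -ltnp | grep 8441 || echo "no listener on 8441"
    # does the healthz endpoint answer at all?
    curl -sk --max-time 2 https://localhost:8441/healthz || echo "apiserver unreachable"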
	I1209 04:44:43.325607 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:43.335615 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:43.335677 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:43.365344 1620518 cri.go:89] found id: ""
	I1209 04:44:43.365360 1620518 logs.go:282] 0 containers: []
	W1209 04:44:43.365367 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:43.365373 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:43.365432 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:43.391751 1620518 cri.go:89] found id: ""
	I1209 04:44:43.391764 1620518 logs.go:282] 0 containers: []
	W1209 04:44:43.391772 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:43.391783 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:43.391843 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:43.417345 1620518 cri.go:89] found id: ""
	I1209 04:44:43.417359 1620518 logs.go:282] 0 containers: []
	W1209 04:44:43.417366 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:43.417372 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:43.417433 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:43.444314 1620518 cri.go:89] found id: ""
	I1209 04:44:43.444328 1620518 logs.go:282] 0 containers: []
	W1209 04:44:43.444335 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:43.444341 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:43.444402 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:43.473635 1620518 cri.go:89] found id: ""
	I1209 04:44:43.473649 1620518 logs.go:282] 0 containers: []
	W1209 04:44:43.473656 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:43.473661 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:43.473721 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:43.499726 1620518 cri.go:89] found id: ""
	I1209 04:44:43.499740 1620518 logs.go:282] 0 containers: []
	W1209 04:44:43.499747 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:43.499752 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:43.499812 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:43.526373 1620518 cri.go:89] found id: ""
	I1209 04:44:43.526388 1620518 logs.go:282] 0 containers: []
	W1209 04:44:43.526396 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:43.526404 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:43.526415 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:43.591625 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:43.591644 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:43.606802 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:43.606818 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:43.671535 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:43.662523   14423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:43.663221   14423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:43.664909   14423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:43.665492   14423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:43.667229   14423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:44:43.662523   14423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:43.663221   14423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:43.664909   14423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:43.665492   14423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:43.667229   14423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:44:43.671545 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:43.671556 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:43.742830 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:43.742849 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:46.272131 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:46.282533 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:46.282611 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:46.307629 1620518 cri.go:89] found id: ""
	I1209 04:44:46.307644 1620518 logs.go:282] 0 containers: []
	W1209 04:44:46.307652 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:46.307657 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:46.307718 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:46.334241 1620518 cri.go:89] found id: ""
	I1209 04:44:46.334255 1620518 logs.go:282] 0 containers: []
	W1209 04:44:46.334262 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:46.334267 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:46.334326 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:46.360606 1620518 cri.go:89] found id: ""
	I1209 04:44:46.360619 1620518 logs.go:282] 0 containers: []
	W1209 04:44:46.360627 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:46.360632 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:46.360693 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:46.391930 1620518 cri.go:89] found id: ""
	I1209 04:44:46.391944 1620518 logs.go:282] 0 containers: []
	W1209 04:44:46.391951 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:46.391956 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:46.392018 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:46.418088 1620518 cri.go:89] found id: ""
	I1209 04:44:46.418102 1620518 logs.go:282] 0 containers: []
	W1209 04:44:46.418109 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:46.418114 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:46.418173 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:46.444114 1620518 cri.go:89] found id: ""
	I1209 04:44:46.444129 1620518 logs.go:282] 0 containers: []
	W1209 04:44:46.444135 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:46.444141 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:46.444202 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:46.469066 1620518 cri.go:89] found id: ""
	I1209 04:44:46.469079 1620518 logs.go:282] 0 containers: []
	W1209 04:44:46.469096 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:46.469105 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:46.469116 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:46.535118 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:46.526762   14524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:46.527187   14524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:46.528934   14524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:46.529451   14524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:46.531143   14524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:44:46.526762   14524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:46.527187   14524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:46.528934   14524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:46.529451   14524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:46.531143   14524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:44:46.535128 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:46.535140 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:46.603490 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:46.603513 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:46.633565 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:46.633582 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:46.707757 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:46.707778 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
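With the API unreachable, the only diagnostics left are host-level, which is why each cycle falls back to journalctl for kubelet and CRI-O plus a filtered dmesg. The same three commands, exactly as they appear in this log, can be run by hand for a manual look:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400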
	I1209 04:44:49.223668 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:49.233804 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:49.233863 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:49.262060 1620518 cri.go:89] found id: ""
	I1209 04:44:49.262074 1620518 logs.go:282] 0 containers: []
	W1209 04:44:49.262081 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:49.262087 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:49.262146 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:49.288289 1620518 cri.go:89] found id: ""
	I1209 04:44:49.288303 1620518 logs.go:282] 0 containers: []
	W1209 04:44:49.288310 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:49.288315 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:49.288372 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:49.317469 1620518 cri.go:89] found id: ""
	I1209 04:44:49.317482 1620518 logs.go:282] 0 containers: []
	W1209 04:44:49.317489 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:49.317495 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:49.317553 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:49.343598 1620518 cri.go:89] found id: ""
	I1209 04:44:49.343612 1620518 logs.go:282] 0 containers: []
	W1209 04:44:49.343619 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:49.343624 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:49.343682 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:49.369884 1620518 cri.go:89] found id: ""
	I1209 04:44:49.369898 1620518 logs.go:282] 0 containers: []
	W1209 04:44:49.369905 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:49.369910 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:49.369968 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:49.397485 1620518 cri.go:89] found id: ""
	I1209 04:44:49.397499 1620518 logs.go:282] 0 containers: []
	W1209 04:44:49.397506 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:49.397512 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:49.397576 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:49.426780 1620518 cri.go:89] found id: ""
	I1209 04:44:49.426794 1620518 logs.go:282] 0 containers: []
	W1209 04:44:49.426802 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:49.426810 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:49.426820 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:49.455508 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:49.455524 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:49.521613 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:49.521632 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:49.537098 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:49.537115 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:49.604403 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:49.595461   14642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:49.596171   14642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:49.597975   14642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:49.598557   14642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:49.600294   14642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:44:49.595461   14642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:49.596171   14642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:49.597975   14642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:49.598557   14642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:49.600294   14642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:44:49.604415 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:49.604427 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:52.175474 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:52.185416 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:52.185490 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:52.210165 1620518 cri.go:89] found id: ""
	I1209 04:44:52.210179 1620518 logs.go:282] 0 containers: []
	W1209 04:44:52.210186 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:52.210191 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:52.210250 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:52.235252 1620518 cri.go:89] found id: ""
	I1209 04:44:52.235265 1620518 logs.go:282] 0 containers: []
	W1209 04:44:52.235272 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:52.235277 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:52.235335 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:52.260814 1620518 cri.go:89] found id: ""
	I1209 04:44:52.260828 1620518 logs.go:282] 0 containers: []
	W1209 04:44:52.260835 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:52.260840 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:52.260899 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:52.287596 1620518 cri.go:89] found id: ""
	I1209 04:44:52.287609 1620518 logs.go:282] 0 containers: []
	W1209 04:44:52.287616 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:52.287621 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:52.287677 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:52.315049 1620518 cri.go:89] found id: ""
	I1209 04:44:52.315062 1620518 logs.go:282] 0 containers: []
	W1209 04:44:52.315069 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:52.315075 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:52.315139 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:52.339741 1620518 cri.go:89] found id: ""
	I1209 04:44:52.339755 1620518 logs.go:282] 0 containers: []
	W1209 04:44:52.339762 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:52.339767 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:52.339825 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:52.369959 1620518 cri.go:89] found id: ""
	I1209 04:44:52.369973 1620518 logs.go:282] 0 containers: []
	W1209 04:44:52.369981 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:52.369988 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:52.369998 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:52.442787 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:52.434156   14730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:52.434984   14730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:52.436742   14730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:52.437458   14730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:52.439036   14730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:44:52.434156   14730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:52.434984   14730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:52.436742   14730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:52.437458   14730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:52.439036   14730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:44:52.442797 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:52.442807 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:52.511615 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:52.511634 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:52.542801 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:52.542817 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:52.608882 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:52.608904 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:55.125120 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:55.135789 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:55.135848 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:55.162401 1620518 cri.go:89] found id: ""
	I1209 04:44:55.162416 1620518 logs.go:282] 0 containers: []
	W1209 04:44:55.162423 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:55.162428 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:55.162487 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:55.190716 1620518 cri.go:89] found id: ""
	I1209 04:44:55.190730 1620518 logs.go:282] 0 containers: []
	W1209 04:44:55.190736 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:55.190742 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:55.190799 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:55.216812 1620518 cri.go:89] found id: ""
	I1209 04:44:55.216825 1620518 logs.go:282] 0 containers: []
	W1209 04:44:55.216832 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:55.216839 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:55.216896 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:55.241064 1620518 cri.go:89] found id: ""
	I1209 04:44:55.241079 1620518 logs.go:282] 0 containers: []
	W1209 04:44:55.241086 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:55.241092 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:55.241148 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:55.270237 1620518 cri.go:89] found id: ""
	I1209 04:44:55.270251 1620518 logs.go:282] 0 containers: []
	W1209 04:44:55.270258 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:55.270263 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:55.270322 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:55.296228 1620518 cri.go:89] found id: ""
	I1209 04:44:55.296242 1620518 logs.go:282] 0 containers: []
	W1209 04:44:55.296249 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:55.296254 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:55.296315 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:55.322153 1620518 cri.go:89] found id: ""
	I1209 04:44:55.322167 1620518 logs.go:282] 0 containers: []
	W1209 04:44:55.322174 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:55.322181 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:55.322192 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:55.390665 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:55.390684 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:55.405506 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:55.405523 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:55.471951 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:55.463255   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:55.463802   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:55.465674   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:55.466180   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:55.467961   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:44:55.463255   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:55.463802   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:55.465674   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:55.466180   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:55.467961   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:44:55.471960 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:55.471972 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:55.542641 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:55.542662 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:58.078721 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:58.089961 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:58.090029 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:58.117883 1620518 cri.go:89] found id: ""
	I1209 04:44:58.117896 1620518 logs.go:282] 0 containers: []
	W1209 04:44:58.117902 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:58.117908 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:58.117968 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:58.150212 1620518 cri.go:89] found id: ""
	I1209 04:44:58.150226 1620518 logs.go:282] 0 containers: []
	W1209 04:44:58.150233 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:58.150238 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:58.150296 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:58.177448 1620518 cri.go:89] found id: ""
	I1209 04:44:58.177462 1620518 logs.go:282] 0 containers: []
	W1209 04:44:58.177469 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:58.177474 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:58.177533 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:58.203663 1620518 cri.go:89] found id: ""
	I1209 04:44:58.203676 1620518 logs.go:282] 0 containers: []
	W1209 04:44:58.203683 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:58.203688 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:58.203779 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:58.229153 1620518 cri.go:89] found id: ""
	I1209 04:44:58.229167 1620518 logs.go:282] 0 containers: []
	W1209 04:44:58.229174 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:58.229179 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:58.229237 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:58.253337 1620518 cri.go:89] found id: ""
	I1209 04:44:58.253365 1620518 logs.go:282] 0 containers: []
	W1209 04:44:58.253372 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:58.253377 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:58.253433 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:58.279202 1620518 cri.go:89] found id: ""
	I1209 04:44:58.279215 1620518 logs.go:282] 0 containers: []
	W1209 04:44:58.279222 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:58.279230 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:58.279240 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:58.352607 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:58.352626 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:58.380559 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:58.380575 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:58.450340 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:58.450359 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:58.466733 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:58.466753 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:58.539538 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:58.531537   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:58.532132   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:58.533605   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:58.534107   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:58.535589   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
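The failed "describe nodes" above can be reproduced by hand with the same commands the harness runs over SSH; a minimal sketch, assuming a shell on the node and the kubeconfig path shown in the log:

	# Probe for the apiserver the way the harness does: process first, then CRI.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'        # exits non-zero: no process
	sudo crictl ps -a --quiet --name=kube-apiserver     # prints nothing: no container
	# With nothing listening on 8441, describe nodes fails with "connection refused":
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig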
	I1209 04:45:01.039807 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:01.051635 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:01.051699 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:01.081094 1620518 cri.go:89] found id: ""
	I1209 04:45:01.081120 1620518 logs.go:282] 0 containers: []
	W1209 04:45:01.081132 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:01.081138 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:01.081216 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:01.110254 1620518 cri.go:89] found id: ""
	I1209 04:45:01.110270 1620518 logs.go:282] 0 containers: []
	W1209 04:45:01.110277 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:01.110282 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:01.110348 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:01.142200 1620518 cri.go:89] found id: ""
	I1209 04:45:01.142217 1620518 logs.go:282] 0 containers: []
	W1209 04:45:01.142224 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:01.142230 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:01.142295 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:01.173624 1620518 cri.go:89] found id: ""
	I1209 04:45:01.173640 1620518 logs.go:282] 0 containers: []
	W1209 04:45:01.173647 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:01.173653 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:01.173714 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:01.200655 1620518 cri.go:89] found id: ""
	I1209 04:45:01.200669 1620518 logs.go:282] 0 containers: []
	W1209 04:45:01.200676 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:01.200681 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:01.200753 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:01.228245 1620518 cri.go:89] found id: ""
	I1209 04:45:01.228260 1620518 logs.go:282] 0 containers: []
	W1209 04:45:01.228268 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:01.228274 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:01.228344 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:01.255910 1620518 cri.go:89] found id: ""
	I1209 04:45:01.255924 1620518 logs.go:282] 0 containers: []
	W1209 04:45:01.255932 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:01.255941 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:01.255955 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:01.272811 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:01.272829 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:01.345905 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:01.336766   15040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:01.337312   15040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:01.339248   15040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:01.339633   15040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:01.341414   15040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:45:01.345916 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:01.345926 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:01.428612 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:01.428634 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:01.462789 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:01.462805 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
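Every retry fails the same way: "dial tcp [::1]:8441: connect: connection refused" means nothing is accepting connections on the apiserver port at all, as opposed to a TLS or authorization failure. A hypothetical spot check from the node (neither command appears in the recorded run) would be to look for a listener on 8441:

	# Hypothetical diagnostics; not part of the test run above.
	sudo ss -ltnp | grep 8441                        # no output => nothing bound to the port
	curl -ksS https://localhost:8441/healthz; echo   # "connection refused" while the apiserver is down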
	... the same log-collection cycle repeats at roughly 3-second intervals (04:45:04, 04:45:07, 04:45:10,
	04:45:13, 04:45:16, 04:45:19, 04:45:22) with identical results: "sudo pgrep -xnf kube-apiserver.*minikube.*"
	finds no process; "sudo crictl ps -a --quiet --name=..." returns no container IDs for kube-apiserver, etcd,
	coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet; kubelet, dmesg, CRI-O, and
	container-status logs are gathered; and every "kubectl describe nodes" run exits with status 1 on
	"The connection to the server localhost:8441 was refused - did you specify the right host or port?" ...
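The wait loop condensed above amounts to polling the CRI every few seconds for an apiserver container until a deadline expires. A shell equivalent of that polling, as a sketch reusing only the crictl invocation from the log (the 40 x 3 s cap is an assumption, not the harness's actual timeout):

	# Poll until crictl reports a kube-apiserver container or ~120 s elapse.
	for i in $(seq 1 40); do
	  id=$(sudo crictl ps -a --quiet --name=kube-apiserver)
	  [ -n "$id" ] && { echo "apiserver container: $id"; break; }
	  sleep 3
	done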
	I1209 04:45:24.791989 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:24.802138 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:24.802199 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:24.830421 1620518 cri.go:89] found id: ""
	I1209 04:45:24.830434 1620518 logs.go:282] 0 containers: []
	W1209 04:45:24.830441 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:24.830446 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:24.830509 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:24.855641 1620518 cri.go:89] found id: ""
	I1209 04:45:24.855653 1620518 logs.go:282] 0 containers: []
	W1209 04:45:24.855661 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:24.855666 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:24.855723 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:24.882261 1620518 cri.go:89] found id: ""
	I1209 04:45:24.882275 1620518 logs.go:282] 0 containers: []
	W1209 04:45:24.882282 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:24.882287 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:24.882346 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:24.909451 1620518 cri.go:89] found id: ""
	I1209 04:45:24.909465 1620518 logs.go:282] 0 containers: []
	W1209 04:45:24.909472 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:24.909477 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:24.909538 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:24.935023 1620518 cri.go:89] found id: ""
	I1209 04:45:24.935036 1620518 logs.go:282] 0 containers: []
	W1209 04:45:24.935043 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:24.935048 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:24.935105 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:24.965362 1620518 cri.go:89] found id: ""
	I1209 04:45:24.965375 1620518 logs.go:282] 0 containers: []
	W1209 04:45:24.965390 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:24.965396 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:24.965454 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:24.993349 1620518 cri.go:89] found id: ""
	I1209 04:45:24.993362 1620518 logs.go:282] 0 containers: []
	W1209 04:45:24.993369 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:24.993377 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:24.993387 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:25.060817 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:25.060841 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:25.077397 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:25.077415 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:25.149136 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:25.140893   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.141508   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.142608   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.143318   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.145008   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:45:25.149146 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:25.149157 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:25.218866 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:25.218886 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:27.749537 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:27.760277 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:27.760345 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:27.786623 1620518 cri.go:89] found id: ""
	I1209 04:45:27.786636 1620518 logs.go:282] 0 containers: []
	W1209 04:45:27.786643 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:27.786648 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:27.786705 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:27.813156 1620518 cri.go:89] found id: ""
	I1209 04:45:27.813169 1620518 logs.go:282] 0 containers: []
	W1209 04:45:27.813176 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:27.813181 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:27.813238 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:27.838803 1620518 cri.go:89] found id: ""
	I1209 04:45:27.838817 1620518 logs.go:282] 0 containers: []
	W1209 04:45:27.838824 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:27.838835 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:27.838896 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:27.865975 1620518 cri.go:89] found id: ""
	I1209 04:45:27.865988 1620518 logs.go:282] 0 containers: []
	W1209 04:45:27.865996 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:27.866001 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:27.866058 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:27.891739 1620518 cri.go:89] found id: ""
	I1209 04:45:27.891753 1620518 logs.go:282] 0 containers: []
	W1209 04:45:27.891761 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:27.891766 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:27.891825 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:27.922057 1620518 cri.go:89] found id: ""
	I1209 04:45:27.922071 1620518 logs.go:282] 0 containers: []
	W1209 04:45:27.922079 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:27.922084 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:27.922143 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:27.947345 1620518 cri.go:89] found id: ""
	I1209 04:45:27.947359 1620518 logs.go:282] 0 containers: []
	W1209 04:45:27.947366 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:27.947373 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:27.947384 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:28.018760 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:28.018788 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:28.035483 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:28.035508 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:28.104231 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:28.095397   15994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:28.096215   15994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:28.097976   15994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:28.098564   15994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:28.100134   15994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:45:28.104241 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:28.104253 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:28.173176 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:28.173196 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:30.707635 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:30.717972 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:30.718036 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:30.743336 1620518 cri.go:89] found id: ""
	I1209 04:45:30.743350 1620518 logs.go:282] 0 containers: []
	W1209 04:45:30.743357 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:30.743363 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:30.743420 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:30.768727 1620518 cri.go:89] found id: ""
	I1209 04:45:30.768741 1620518 logs.go:282] 0 containers: []
	W1209 04:45:30.768748 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:30.768754 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:30.768811 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:30.797959 1620518 cri.go:89] found id: ""
	I1209 04:45:30.797973 1620518 logs.go:282] 0 containers: []
	W1209 04:45:30.797980 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:30.797985 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:30.798046 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:30.825422 1620518 cri.go:89] found id: ""
	I1209 04:45:30.825435 1620518 logs.go:282] 0 containers: []
	W1209 04:45:30.825442 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:30.825448 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:30.825506 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:30.854265 1620518 cri.go:89] found id: ""
	I1209 04:45:30.854278 1620518 logs.go:282] 0 containers: []
	W1209 04:45:30.854285 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:30.854290 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:30.854347 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:30.880403 1620518 cri.go:89] found id: ""
	I1209 04:45:30.880418 1620518 logs.go:282] 0 containers: []
	W1209 04:45:30.880426 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:30.880432 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:30.880494 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:30.913767 1620518 cri.go:89] found id: ""
	I1209 04:45:30.913781 1620518 logs.go:282] 0 containers: []
	W1209 04:45:30.913789 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:30.913796 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:30.913807 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:30.980378 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:30.980398 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:30.995822 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:30.995838 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:31.066169 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:31.058055   16098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:31.058662   16098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:31.060209   16098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:31.060692   16098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:31.062141   16098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:45:31.066179 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:31.066190 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:31.138123 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:31.138142 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:33.670737 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:33.681036 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:33.681099 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:33.709926 1620518 cri.go:89] found id: ""
	I1209 04:45:33.709939 1620518 logs.go:282] 0 containers: []
	W1209 04:45:33.709947 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:33.709963 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:33.710023 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:33.737554 1620518 cri.go:89] found id: ""
	I1209 04:45:33.737567 1620518 logs.go:282] 0 containers: []
	W1209 04:45:33.737574 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:33.737579 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:33.737640 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:33.763709 1620518 cri.go:89] found id: ""
	I1209 04:45:33.763723 1620518 logs.go:282] 0 containers: []
	W1209 04:45:33.763731 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:33.763736 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:33.763794 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:33.792885 1620518 cri.go:89] found id: ""
	I1209 04:45:33.792899 1620518 logs.go:282] 0 containers: []
	W1209 04:45:33.792906 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:33.792912 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:33.792971 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:33.818657 1620518 cri.go:89] found id: ""
	I1209 04:45:33.818671 1620518 logs.go:282] 0 containers: []
	W1209 04:45:33.818678 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:33.818683 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:33.818741 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:33.845152 1620518 cri.go:89] found id: ""
	I1209 04:45:33.845167 1620518 logs.go:282] 0 containers: []
	W1209 04:45:33.845174 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:33.845179 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:33.845237 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:33.871504 1620518 cri.go:89] found id: ""
	I1209 04:45:33.871517 1620518 logs.go:282] 0 containers: []
	W1209 04:45:33.871524 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:33.871532 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:33.871543 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:33.938353 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:33.938373 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:33.954248 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:33.954267 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:34.025014 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:34.015102   16202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:34.016063   16202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:34.016884   16202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:34.018662   16202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:34.019422   16202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:45:34.025026 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:34.025038 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:34.096006 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:34.096027 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:36.630302 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:36.640925 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:36.640999 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:36.669961 1620518 cri.go:89] found id: ""
	I1209 04:45:36.669975 1620518 logs.go:282] 0 containers: []
	W1209 04:45:36.669982 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:36.669988 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:36.670044 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:36.696918 1620518 cri.go:89] found id: ""
	I1209 04:45:36.696934 1620518 logs.go:282] 0 containers: []
	W1209 04:45:36.696942 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:36.696947 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:36.697007 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:36.727113 1620518 cri.go:89] found id: ""
	I1209 04:45:36.727127 1620518 logs.go:282] 0 containers: []
	W1209 04:45:36.727136 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:36.727141 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:36.727201 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:36.752459 1620518 cri.go:89] found id: ""
	I1209 04:45:36.752473 1620518 logs.go:282] 0 containers: []
	W1209 04:45:36.752480 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:36.752485 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:36.752543 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:36.778403 1620518 cri.go:89] found id: ""
	I1209 04:45:36.778417 1620518 logs.go:282] 0 containers: []
	W1209 04:45:36.778425 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:36.778430 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:36.778488 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:36.809409 1620518 cri.go:89] found id: ""
	I1209 04:45:36.809423 1620518 logs.go:282] 0 containers: []
	W1209 04:45:36.809430 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:36.809436 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:36.809494 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:36.838444 1620518 cri.go:89] found id: ""
	I1209 04:45:36.838457 1620518 logs.go:282] 0 containers: []
	W1209 04:45:36.838464 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:36.838472 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:36.838484 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:36.853995 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:36.854011 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:36.919371 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:36.909708   16305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:36.910442   16305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:36.912223   16305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:36.912779   16305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:36.914634   16305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:45:36.919381 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:36.919395 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:36.992004 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:36.992025 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:37.033214 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:37.033230 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:39.602680 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:39.614476 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:39.614537 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:39.644626 1620518 cri.go:89] found id: ""
	I1209 04:45:39.644640 1620518 logs.go:282] 0 containers: []
	W1209 04:45:39.644647 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:39.644652 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:39.644711 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:39.673317 1620518 cri.go:89] found id: ""
	I1209 04:45:39.673331 1620518 logs.go:282] 0 containers: []
	W1209 04:45:39.673338 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:39.673343 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:39.673404 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:39.699053 1620518 cri.go:89] found id: ""
	I1209 04:45:39.699067 1620518 logs.go:282] 0 containers: []
	W1209 04:45:39.699074 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:39.699079 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:39.699141 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:39.724341 1620518 cri.go:89] found id: ""
	I1209 04:45:39.724355 1620518 logs.go:282] 0 containers: []
	W1209 04:45:39.724362 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:39.724370 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:39.724429 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:39.749975 1620518 cri.go:89] found id: ""
	I1209 04:45:39.749988 1620518 logs.go:282] 0 containers: []
	W1209 04:45:39.749995 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:39.750001 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:39.750060 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:39.774556 1620518 cri.go:89] found id: ""
	I1209 04:45:39.774588 1620518 logs.go:282] 0 containers: []
	W1209 04:45:39.774597 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:39.774602 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:39.774663 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:39.800285 1620518 cri.go:89] found id: ""
	I1209 04:45:39.800299 1620518 logs.go:282] 0 containers: []
	W1209 04:45:39.800307 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:39.800314 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:39.800325 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:39.830073 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:39.830089 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:39.898438 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:39.898457 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:39.913743 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:39.913759 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:39.982308 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:39.974192   16422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:39.974938   16422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:39.976740   16422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:39.977237   16422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:39.978358   16422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:45:39.982319 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:39.982332 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:42.561378 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:42.571315 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:42.571383 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:42.602452 1620518 cri.go:89] found id: ""
	I1209 04:45:42.602466 1620518 logs.go:282] 0 containers: []
	W1209 04:45:42.602473 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:42.602478 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:42.602541 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:42.634016 1620518 cri.go:89] found id: ""
	I1209 04:45:42.634029 1620518 logs.go:282] 0 containers: []
	W1209 04:45:42.634037 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:42.634042 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:42.634102 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:42.665601 1620518 cri.go:89] found id: ""
	I1209 04:45:42.665614 1620518 logs.go:282] 0 containers: []
	W1209 04:45:42.665621 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:42.665627 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:42.665683 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:42.692605 1620518 cri.go:89] found id: ""
	I1209 04:45:42.692618 1620518 logs.go:282] 0 containers: []
	W1209 04:45:42.692626 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:42.692631 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:42.692692 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:42.719572 1620518 cri.go:89] found id: ""
	I1209 04:45:42.719585 1620518 logs.go:282] 0 containers: []
	W1209 04:45:42.719592 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:42.719598 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:42.719660 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:42.745298 1620518 cri.go:89] found id: ""
	I1209 04:45:42.745312 1620518 logs.go:282] 0 containers: []
	W1209 04:45:42.745319 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:42.745324 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:42.745391 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:42.770685 1620518 cri.go:89] found id: ""
	I1209 04:45:42.770698 1620518 logs.go:282] 0 containers: []
	W1209 04:45:42.770706 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:42.770714 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:42.770724 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:42.840866 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:42.840888 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:42.871659 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:42.871676 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:42.941154 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:42.941174 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:42.956621 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:42.956638 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:43.026115 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:43.016607   16527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:43.017380   16527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:43.019274   16527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:43.020072   16527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:43.021739   16527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:45:45.527782 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:45.537648 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:45.537707 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:45.564248 1620518 cri.go:89] found id: ""
	I1209 04:45:45.564263 1620518 logs.go:282] 0 containers: []
	W1209 04:45:45.564270 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:45.564277 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:45.564337 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:45.599479 1620518 cri.go:89] found id: ""
	I1209 04:45:45.599492 1620518 logs.go:282] 0 containers: []
	W1209 04:45:45.599499 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:45.599504 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:45.599560 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:45.629541 1620518 cri.go:89] found id: ""
	I1209 04:45:45.629554 1620518 logs.go:282] 0 containers: []
	W1209 04:45:45.629563 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:45.629568 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:45.629624 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:45.660451 1620518 cri.go:89] found id: ""
	I1209 04:45:45.660465 1620518 logs.go:282] 0 containers: []
	W1209 04:45:45.660472 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:45.660477 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:45.660537 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:45.686489 1620518 cri.go:89] found id: ""
	I1209 04:45:45.686503 1620518 logs.go:282] 0 containers: []
	W1209 04:45:45.686509 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:45.686514 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:45.686616 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:45.711940 1620518 cri.go:89] found id: ""
	I1209 04:45:45.711954 1620518 logs.go:282] 0 containers: []
	W1209 04:45:45.711961 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:45.711967 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:45.712025 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:45.737703 1620518 cri.go:89] found id: ""
	I1209 04:45:45.737717 1620518 logs.go:282] 0 containers: []
	W1209 04:45:45.737724 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:45.737732 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:45.737745 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:45.802439 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:45.793968   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:45.794602   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:45.796316   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:45.796982   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:45.798503   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:45:45.802451 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:45.802474 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:45.871530 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:45.871550 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:45.901994 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:45.902010 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:45.973222 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:45.973241 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
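	Every cycle fails at the same dial step ("dial tcp [::1]:8441: connect: connection refused"), and crictl finds no kube-apiserver container at all, so the apiserver never came up rather than coming up unhealthy. A sketch of confirming that from inside the node without kubectl (ss and the /healthz path are assumptions, not commands taken from this log):

	# expect "Connection refused" while no apiserver is running
	curl -sk https://localhost:8441/healthz
	# confirm nothing is listening on the apiserver port
	sudo ss -ltnp | grep ':8441' || echo 'nothing listening on :8441'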
	I1209 04:45:48.488532 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:48.499003 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:48.499072 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:48.524749 1620518 cri.go:89] found id: ""
	I1209 04:45:48.524762 1620518 logs.go:282] 0 containers: []
	W1209 04:45:48.524769 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:48.524774 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:48.524830 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:48.553895 1620518 cri.go:89] found id: ""
	I1209 04:45:48.553909 1620518 logs.go:282] 0 containers: []
	W1209 04:45:48.553917 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:48.553922 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:48.553984 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:48.581047 1620518 cri.go:89] found id: ""
	I1209 04:45:48.581069 1620518 logs.go:282] 0 containers: []
	W1209 04:45:48.581078 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:48.581084 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:48.581153 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:48.614680 1620518 cri.go:89] found id: ""
	I1209 04:45:48.614693 1620518 logs.go:282] 0 containers: []
	W1209 04:45:48.614701 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:48.614706 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:48.614774 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:48.643818 1620518 cri.go:89] found id: ""
	I1209 04:45:48.643832 1620518 logs.go:282] 0 containers: []
	W1209 04:45:48.643839 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:48.643845 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:48.643919 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:48.669618 1620518 cri.go:89] found id: ""
	I1209 04:45:48.669632 1620518 logs.go:282] 0 containers: []
	W1209 04:45:48.669642 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:48.669647 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:48.669710 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:48.699049 1620518 cri.go:89] found id: ""
	I1209 04:45:48.699063 1620518 logs.go:282] 0 containers: []
	W1209 04:45:48.699070 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:48.699077 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:48.699088 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:48.731315 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:48.731331 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:48.798219 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:48.798239 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:48.813603 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:48.813620 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:48.877674 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:48.869445   16732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:48.870317   16732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:48.871899   16732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:48.872215   16732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:48.873716   16732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:45:48.877684 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:48.877695 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
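Each stanza of this shape is one iteration of the same probe: crictl is queried for every control-plane component by name, and when all queries return an empty ID list, minikube falls back to log gathering. A rough shell equivalent of a single iteration (a sketch only; the real implementation is the Go code in cri.go/logs.go referenced above):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "No container was found matching \"$name\""
    done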
	I1209 04:45:51.447558 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:51.457634 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:51.457694 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:51.487281 1620518 cri.go:89] found id: ""
	I1209 04:45:51.487294 1620518 logs.go:282] 0 containers: []
	W1209 04:45:51.487301 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:51.487306 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:51.487364 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:51.518737 1620518 cri.go:89] found id: ""
	I1209 04:45:51.518751 1620518 logs.go:282] 0 containers: []
	W1209 04:45:51.518758 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:51.518763 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:51.518837 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:51.544469 1620518 cri.go:89] found id: ""
	I1209 04:45:51.544481 1620518 logs.go:282] 0 containers: []
	W1209 04:45:51.544488 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:51.544493 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:51.544549 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:51.569588 1620518 cri.go:89] found id: ""
	I1209 04:45:51.569602 1620518 logs.go:282] 0 containers: []
	W1209 04:45:51.569624 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:51.569628 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:51.569687 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:51.612979 1620518 cri.go:89] found id: ""
	I1209 04:45:51.612992 1620518 logs.go:282] 0 containers: []
	W1209 04:45:51.612999 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:51.613004 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:51.613062 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:51.646866 1620518 cri.go:89] found id: ""
	I1209 04:45:51.646880 1620518 logs.go:282] 0 containers: []
	W1209 04:45:51.646886 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:51.646892 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:51.646954 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:51.672767 1620518 cri.go:89] found id: ""
	I1209 04:45:51.672781 1620518 logs.go:282] 0 containers: []
	W1209 04:45:51.672788 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:51.672795 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:51.672805 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:51.738601 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:51.738620 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:51.753536 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:51.753553 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:51.823113 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:51.814616   16823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:51.815237   16823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:51.816978   16823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:51.817576   16823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:51.819130   16823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:45:51.823124 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:51.823134 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:51.895060 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:51.895078 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
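The "describe nodes" gather fails for the same underlying reason each time: kubectl on the node targets https://localhost:8441, this cluster's apiserver port, and nothing is listening there. The probe can be replayed verbatim (binary and kubeconfig paths copied from the log):

    # Expect "connection refused" while the apiserver container is down.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig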
	I1209 04:45:54.424057 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:54.434546 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:54.434637 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:54.461148 1620518 cri.go:89] found id: ""
	I1209 04:45:54.461161 1620518 logs.go:282] 0 containers: []
	W1209 04:45:54.461179 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:54.461185 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:54.461245 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:54.491296 1620518 cri.go:89] found id: ""
	I1209 04:45:54.491310 1620518 logs.go:282] 0 containers: []
	W1209 04:45:54.491316 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:54.491322 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:54.491377 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:54.517141 1620518 cri.go:89] found id: ""
	I1209 04:45:54.517155 1620518 logs.go:282] 0 containers: []
	W1209 04:45:54.517162 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:54.517168 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:54.517228 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:54.543226 1620518 cri.go:89] found id: ""
	I1209 04:45:54.543245 1620518 logs.go:282] 0 containers: []
	W1209 04:45:54.543252 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:54.543258 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:54.543318 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:54.574984 1620518 cri.go:89] found id: ""
	I1209 04:45:54.574998 1620518 logs.go:282] 0 containers: []
	W1209 04:45:54.575005 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:54.575010 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:54.575069 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:54.612321 1620518 cri.go:89] found id: ""
	I1209 04:45:54.612335 1620518 logs.go:282] 0 containers: []
	W1209 04:45:54.612342 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:54.612347 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:54.612405 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:54.639817 1620518 cri.go:89] found id: ""
	I1209 04:45:54.639831 1620518 logs.go:282] 0 containers: []
	W1209 04:45:54.639839 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:54.639847 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:54.639858 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:54.704579 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:54.696022   16926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:54.696791   16926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:54.698435   16926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:54.699124   16926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:54.700720   16926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:45:54.704588 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:54.704610 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:54.772943 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:54.772962 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:54.802082 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:54.802097 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:54.873250 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:54.873278 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
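For reference, the dmesg invocation keeps only the most severe recent kernel messages (flag meanings per util-linux dmesg; the command itself is copied from the log):

    # -P: no pager, -H: human-readable timestamps, -L=never: no color codes;
    # keep warning-and-above severities, capped at the last 400 lines.
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400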
	I1209 04:45:57.389092 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:57.399566 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:57.399631 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:57.424671 1620518 cri.go:89] found id: ""
	I1209 04:45:57.424685 1620518 logs.go:282] 0 containers: []
	W1209 04:45:57.424692 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:57.424698 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:57.424755 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:57.449520 1620518 cri.go:89] found id: ""
	I1209 04:45:57.449533 1620518 logs.go:282] 0 containers: []
	W1209 04:45:57.449549 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:57.449554 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:57.449612 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:57.474934 1620518 cri.go:89] found id: ""
	I1209 04:45:57.474949 1620518 logs.go:282] 0 containers: []
	W1209 04:45:57.474956 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:57.474961 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:57.475017 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:57.504272 1620518 cri.go:89] found id: ""
	I1209 04:45:57.504285 1620518 logs.go:282] 0 containers: []
	W1209 04:45:57.504292 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:57.504297 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:57.504355 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:57.530784 1620518 cri.go:89] found id: ""
	I1209 04:45:57.530797 1620518 logs.go:282] 0 containers: []
	W1209 04:45:57.530804 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:57.530820 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:57.530878 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:57.556189 1620518 cri.go:89] found id: ""
	I1209 04:45:57.556202 1620518 logs.go:282] 0 containers: []
	W1209 04:45:57.556209 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:57.556214 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:57.556271 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:57.584245 1620518 cri.go:89] found id: ""
	I1209 04:45:57.584258 1620518 logs.go:282] 0 containers: []
	W1209 04:45:57.584266 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:57.584273 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:57.584286 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:57.618235 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:57.618250 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:57.693384 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:57.693403 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:57.708210 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:57.708227 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:57.773409 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:57.765285   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:57.766046   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:57.767558   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:57.768018   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:57.769496   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:45:57.773420 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:57.773430 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
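Between gathers, the runner re-checks for a live apiserver process with pgrep: -f matches against the full command line, -x requires the pattern to match it exactly, and -n selects the newest match. Reproduced by hand, with quoting added:

    # Exit status 1 and no output mean no kube-apiserver process is running.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'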
	I1209 04:46:00.342809 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:46:00.358795 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:46:00.358876 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:46:00.400877 1620518 cri.go:89] found id: ""
	I1209 04:46:00.400892 1620518 logs.go:282] 0 containers: []
	W1209 04:46:00.400900 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:46:00.400906 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:46:00.400970 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:46:00.431798 1620518 cri.go:89] found id: ""
	I1209 04:46:00.431813 1620518 logs.go:282] 0 containers: []
	W1209 04:46:00.431820 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:46:00.431828 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:46:00.431892 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:46:00.460666 1620518 cri.go:89] found id: ""
	I1209 04:46:00.460686 1620518 logs.go:282] 0 containers: []
	W1209 04:46:00.460693 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:46:00.460698 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:46:00.460761 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:46:00.488457 1620518 cri.go:89] found id: ""
	I1209 04:46:00.488471 1620518 logs.go:282] 0 containers: []
	W1209 04:46:00.488479 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:46:00.488484 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:46:00.488551 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:46:00.517784 1620518 cri.go:89] found id: ""
	I1209 04:46:00.517797 1620518 logs.go:282] 0 containers: []
	W1209 04:46:00.517805 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:46:00.517810 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:46:00.517873 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:46:00.545946 1620518 cri.go:89] found id: ""
	I1209 04:46:00.545960 1620518 logs.go:282] 0 containers: []
	W1209 04:46:00.545968 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:46:00.545973 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:46:00.546035 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:46:00.575131 1620518 cri.go:89] found id: ""
	I1209 04:46:00.575153 1620518 logs.go:282] 0 containers: []
	W1209 04:46:00.575161 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:46:00.575168 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:46:00.575179 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:46:00.612360 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:46:00.612379 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:46:00.689205 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:46:00.689224 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:46:00.704596 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:46:00.704612 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:46:00.770156 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:46:00.762022   17152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:00.762546   17152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:00.764120   17152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:00.764452   17152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:00.765962   17152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:46:00.770165 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:46:00.770175 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:46:03.338719 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:46:03.349336 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:46:03.349402 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:46:03.374937 1620518 cri.go:89] found id: ""
	I1209 04:46:03.374950 1620518 logs.go:282] 0 containers: []
	W1209 04:46:03.374957 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:46:03.374963 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:46:03.375022 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:46:03.405176 1620518 cri.go:89] found id: ""
	I1209 04:46:03.405206 1620518 logs.go:282] 0 containers: []
	W1209 04:46:03.405213 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:46:03.405219 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:46:03.405285 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:46:03.434836 1620518 cri.go:89] found id: ""
	I1209 04:46:03.434860 1620518 logs.go:282] 0 containers: []
	W1209 04:46:03.434868 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:46:03.434874 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:46:03.434948 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:46:03.464055 1620518 cri.go:89] found id: ""
	I1209 04:46:03.464077 1620518 logs.go:282] 0 containers: []
	W1209 04:46:03.464085 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:46:03.464090 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:46:03.464189 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:46:03.493083 1620518 cri.go:89] found id: ""
	I1209 04:46:03.493106 1620518 logs.go:282] 0 containers: []
	W1209 04:46:03.493114 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:46:03.493119 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:46:03.493194 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:46:03.518929 1620518 cri.go:89] found id: ""
	I1209 04:46:03.518942 1620518 logs.go:282] 0 containers: []
	W1209 04:46:03.518950 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:46:03.518955 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:46:03.519016 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:46:03.543738 1620518 cri.go:89] found id: ""
	I1209 04:46:03.543751 1620518 logs.go:282] 0 containers: []
	W1209 04:46:03.543758 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:46:03.543766 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:46:03.543776 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:46:03.611972 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:46:03.611992 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:46:03.644882 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:46:03.644905 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:46:03.715853 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:46:03.715873 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:46:03.730852 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:46:03.730870 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:46:03.797963 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:46:03.789266   17259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:03.790005   17259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:03.791740   17259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:03.792349   17259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:03.794037   17259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:46:06.299034 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:46:06.310369 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:46:06.310430 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:46:06.338011 1620518 cri.go:89] found id: ""
	I1209 04:46:06.338024 1620518 logs.go:282] 0 containers: []
	W1209 04:46:06.338031 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:46:06.338037 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:46:06.338093 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:46:06.364537 1620518 cri.go:89] found id: ""
	I1209 04:46:06.364551 1620518 logs.go:282] 0 containers: []
	W1209 04:46:06.364558 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:46:06.364566 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:46:06.364621 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:46:06.390874 1620518 cri.go:89] found id: ""
	I1209 04:46:06.390894 1620518 logs.go:282] 0 containers: []
	W1209 04:46:06.390907 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:46:06.390912 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:46:06.390972 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:46:06.416068 1620518 cri.go:89] found id: ""
	I1209 04:46:06.416082 1620518 logs.go:282] 0 containers: []
	W1209 04:46:06.416088 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:46:06.416093 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:46:06.416152 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:46:06.445711 1620518 cri.go:89] found id: ""
	I1209 04:46:06.445724 1620518 logs.go:282] 0 containers: []
	W1209 04:46:06.445731 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:46:06.445736 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:46:06.445794 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:46:06.472619 1620518 cri.go:89] found id: ""
	I1209 04:46:06.472632 1620518 logs.go:282] 0 containers: []
	W1209 04:46:06.472639 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:46:06.472644 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:46:06.472704 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:46:06.501335 1620518 cri.go:89] found id: ""
	I1209 04:46:06.501348 1620518 logs.go:282] 0 containers: []
	W1209 04:46:06.501355 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:46:06.501372 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:46:06.501382 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:46:06.564989 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:46:06.556947   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:06.557432   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:06.559150   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:06.559456   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:06.560989   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:46:06.564998 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:46:06.565009 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:46:06.636608 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:46:06.636626 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:46:06.667969 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:46:06.667986 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:46:06.734125 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:46:06.734145 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
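A quick way to confirm what the repeated kubectl errors imply, namely that nothing is bound to port 8441 on the node (a sketch using ss; this check is an assumption and not part of minikube's probe):

    # No output means no listener on 8441, matching "connection refused".
    sudo ss -ltnp | grep ':8441'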
	I1209 04:46:09.249456 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:46:09.259765 1620518 kubeadm.go:602] duration metric: took 4m2.693827645s to restartPrimaryControlPlane
	W1209 04:46:09.259826 1620518 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1209 04:46:09.259905 1620518 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1209 04:46:09.672351 1620518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 04:46:09.685870 1620518 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 04:46:09.693855 1620518 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 04:46:09.693913 1620518 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 04:46:09.701686 1620518 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 04:46:09.701697 1620518 kubeadm.go:158] found existing configuration files:
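Because kubeadm reset has just removed /etc/kubernetes/*.conf, this check is expected to exit with status 2 and stale-config cleanup is skipped; the check itself is a single ls over the four kubeconfig files (copied from the log):

    # A nonzero exit here means at least one file is missing (all four, post-reset).
    sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
      /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf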
	
	I1209 04:46:09.701750 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1209 04:46:09.709486 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 04:46:09.709542 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 04:46:09.717080 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1209 04:46:09.724681 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 04:46:09.724735 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 04:46:09.732335 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1209 04:46:09.740201 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 04:46:09.740255 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 04:46:09.747717 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1209 04:46:09.755316 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 04:46:09.755370 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
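The four grep/rm pairs above all apply the same rule: if a kubeconfig does not reference https://control-plane.minikube.internal:8441 (including when the file is absent, as here), remove it so kubeadm init can regenerate it. A condensed sketch of that logic:

    for f in admin kubelet controller-manager scheduler; do
      cfg="/etc/kubernetes/${f}.conf"
      sudo grep -q 'https://control-plane.minikube.internal:8441' "$cfg" \
        || sudo rm -f "$cfg"   # grep exits nonzero on no match or missing file
    done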
	I1209 04:46:09.762723 1620518 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 04:46:09.800341 1620518 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1209 04:46:09.800668 1620518 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 04:46:09.867665 1620518 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 04:46:09.867727 1620518 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1209 04:46:09.867766 1620518 kubeadm.go:319] OS: Linux
	I1209 04:46:09.867807 1620518 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 04:46:09.867852 1620518 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1209 04:46:09.867896 1620518 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 04:46:09.867942 1620518 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 04:46:09.867987 1620518 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 04:46:09.868032 1620518 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 04:46:09.868074 1620518 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 04:46:09.868120 1620518 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 04:46:09.868162 1620518 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1209 04:46:09.937281 1620518 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 04:46:09.937384 1620518 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 04:46:09.937481 1620518 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 04:46:09.947317 1620518 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 04:46:09.952721 1620518 out.go:252]   - Generating certificates and keys ...
	I1209 04:46:09.952808 1620518 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 04:46:09.952877 1620518 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 04:46:09.952958 1620518 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 04:46:09.953021 1620518 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1209 04:46:09.953092 1620518 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 04:46:09.953141 1620518 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1209 04:46:09.953206 1620518 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1209 04:46:09.953269 1620518 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1209 04:46:09.953343 1620518 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 04:46:09.953417 1620518 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 04:46:09.953461 1620518 kubeadm.go:319] [certs] Using the existing "sa" key
	I1209 04:46:09.953513 1620518 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 04:46:10.029245 1620518 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 04:46:10.224354 1620518 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 04:46:10.667691 1620518 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 04:46:10.882600 1620518 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 04:46:11.073140 1620518 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 04:46:11.073694 1620518 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 04:46:11.076408 1620518 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 04:46:11.079859 1620518 out.go:252]   - Booting up control plane ...
	I1209 04:46:11.079965 1620518 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 04:46:11.080042 1620518 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 04:46:11.080114 1620518 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 04:46:11.095853 1620518 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 04:46:11.095951 1620518 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 04:46:11.104994 1620518 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 04:46:11.105485 1620518 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 04:46:11.105715 1620518 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 04:46:11.236975 1620518 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 04:46:11.237088 1620518 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 04:50:11.237231 1620518 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000344141s
	I1209 04:50:11.237256 1620518 kubeadm.go:319] 
	I1209 04:50:11.237309 1620518 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1209 04:50:11.237340 1620518 kubeadm.go:319] 	- The kubelet is not running
	I1209 04:50:11.237438 1620518 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 04:50:11.237443 1620518 kubeadm.go:319] 
	I1209 04:50:11.237541 1620518 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 04:50:11.237571 1620518 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1209 04:50:11.237600 1620518 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1209 04:50:11.237603 1620518 kubeadm.go:319] 
	I1209 04:50:11.241458 1620518 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 04:50:11.241910 1620518 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1209 04:50:11.242023 1620518 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 04:50:11.242266 1620518 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1209 04:50:11.242272 1620518 kubeadm.go:319] 
	I1209 04:50:11.242336 1620518 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1209 04:50:11.242454 1620518 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000344141s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1209 04:50:11.242544 1620518 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1209 04:50:11.655787 1620518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 04:50:11.668676 1620518 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 04:50:11.668730 1620518 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 04:50:11.676546 1620518 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 04:50:11.676562 1620518 kubeadm.go:158] found existing configuration files:
	
	I1209 04:50:11.676612 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1209 04:50:11.684172 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 04:50:11.684236 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 04:50:11.691594 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1209 04:50:11.699302 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 04:50:11.699363 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 04:50:11.706772 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1209 04:50:11.714846 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 04:50:11.714902 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 04:50:11.722267 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1209 04:50:11.730186 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 04:50:11.730250 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 04:50:11.738143 1620518 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 04:50:11.781074 1620518 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1209 04:50:11.781123 1620518 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 04:50:11.856141 1620518 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 04:50:11.856206 1620518 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1209 04:50:11.856240 1620518 kubeadm.go:319] OS: Linux
	I1209 04:50:11.856283 1620518 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 04:50:11.856330 1620518 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1209 04:50:11.856377 1620518 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 04:50:11.856424 1620518 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 04:50:11.856471 1620518 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 04:50:11.856522 1620518 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 04:50:11.856566 1620518 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 04:50:11.856614 1620518 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 04:50:11.856660 1620518 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1209 04:50:11.927746 1620518 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 04:50:11.927875 1620518 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 04:50:11.927971 1620518 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 04:50:11.934983 1620518 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 04:50:11.938507 1620518 out.go:252]   - Generating certificates and keys ...
	I1209 04:50:11.938697 1620518 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 04:50:11.938772 1620518 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 04:50:11.938867 1620518 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 04:50:11.938937 1620518 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1209 04:50:11.939018 1620518 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 04:50:11.939071 1620518 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1209 04:50:11.939143 1620518 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1209 04:50:11.939213 1620518 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1209 04:50:11.939302 1620518 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 04:50:11.939383 1620518 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 04:50:11.939690 1620518 kubeadm.go:319] [certs] Using the existing "sa" key
	I1209 04:50:11.939748 1620518 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 04:50:12.353584 1620518 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 04:50:12.812738 1620518 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 04:50:13.265058 1620518 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 04:50:13.417250 1620518 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 04:50:13.472548 1620518 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 04:50:13.473076 1620518 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 04:50:13.475724 1620518 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 04:50:13.478920 1620518 out.go:252]   - Booting up control plane ...
	I1209 04:50:13.479026 1620518 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 04:50:13.479104 1620518 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 04:50:13.479930 1620518 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 04:50:13.496348 1620518 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 04:50:13.496458 1620518 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 04:50:13.504378 1620518 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 04:50:13.504655 1620518 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 04:50:13.504696 1620518 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 04:50:13.630713 1620518 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 04:50:13.630826 1620518 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 04:54:13.630972 1620518 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000259173s
	I1209 04:54:13.630997 1620518 kubeadm.go:319] 
	I1209 04:54:13.631053 1620518 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1209 04:54:13.631086 1620518 kubeadm.go:319] 	- The kubelet is not running
	I1209 04:54:13.631200 1620518 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 04:54:13.631206 1620518 kubeadm.go:319] 
	I1209 04:54:13.631310 1620518 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 04:54:13.631395 1620518 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1209 04:54:13.631461 1620518 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1209 04:54:13.631466 1620518 kubeadm.go:319] 
	I1209 04:54:13.635649 1620518 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 04:54:13.636127 1620518 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1209 04:54:13.636242 1620518 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 04:54:13.636479 1620518 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1209 04:54:13.636485 1620518 kubeadm.go:319] 
	I1209 04:54:13.636553 1620518 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1209 04:54:13.636616 1620518 kubeadm.go:403] duration metric: took 12m7.110467735s to StartCluster
	I1209 04:54:13.636648 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:54:13.636715 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:54:13.662011 1620518 cri.go:89] found id: ""
	I1209 04:54:13.662024 1620518 logs.go:282] 0 containers: []
	W1209 04:54:13.662032 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:54:13.662037 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:54:13.662094 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:54:13.688278 1620518 cri.go:89] found id: ""
	I1209 04:54:13.688293 1620518 logs.go:282] 0 containers: []
	W1209 04:54:13.688299 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:54:13.688304 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:54:13.688363 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:54:13.714700 1620518 cri.go:89] found id: ""
	I1209 04:54:13.714715 1620518 logs.go:282] 0 containers: []
	W1209 04:54:13.714723 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:54:13.714729 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:54:13.714795 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:54:13.740152 1620518 cri.go:89] found id: ""
	I1209 04:54:13.740166 1620518 logs.go:282] 0 containers: []
	W1209 04:54:13.740173 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:54:13.740178 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:54:13.740235 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:54:13.766214 1620518 cri.go:89] found id: ""
	I1209 04:54:13.766227 1620518 logs.go:282] 0 containers: []
	W1209 04:54:13.766235 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:54:13.766240 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:54:13.766300 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:54:13.793141 1620518 cri.go:89] found id: ""
	I1209 04:54:13.793155 1620518 logs.go:282] 0 containers: []
	W1209 04:54:13.793162 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:54:13.793168 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:54:13.793225 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:54:13.824264 1620518 cri.go:89] found id: ""
	I1209 04:54:13.824278 1620518 logs.go:282] 0 containers: []
	W1209 04:54:13.824286 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:54:13.824294 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:54:13.824305 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:54:13.865509 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:54:13.865527 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:54:13.944055 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:54:13.944075 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:54:13.960571 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:54:13.960593 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:54:14.028160 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:54:14.019001   21174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:54:14.019792   21174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:54:14.021489   21174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:54:14.021862   21174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:54:14.023410   21174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:54:14.019001   21174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:54:14.019792   21174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:54:14.021489   21174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:54:14.021862   21174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:54:14.023410   21174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:54:14.028170 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:54:14.028180 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1209 04:54:14.099915 1620518 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000259173s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1209 04:54:14.099962 1620518 out.go:285] * 
	W1209 04:54:14.100108 1620518 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000259173s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 04:54:14.100197 1620518 out.go:285] * 
	W1209 04:54:14.102317 1620518 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 04:54:14.107888 1620518 out.go:203] 
	W1209 04:54:14.111655 1620518 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000259173s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 04:54:14.111892 1620518 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1209 04:54:14.111932 1620518 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1209 04:54:14.116964 1620518 out.go:203] 
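
The kubeadm warnings repeated throughout this log name the actual failure mode: on this cgroup v1 host, kubelet v1.35.0-beta.0 rejects its own configuration unless the KubeletConfiguration option 'FailCgroupV1' (spelled failCgroupV1 in the YAML) is set to false. A minimal sketch of applying that opt-in by hand on the node, assuming the config path /var/lib/kubelet/config.yaml from the [kubelet-start] lines above; these commands are illustrative and were not run as part of this test:

	# inspect the kubelet config kubeadm wrote
	sudo grep -i cgroup /var/lib/kubelet/config.yaml
	# opt back in to cgroup v1, per the kubeadm SystemVerification warning
	echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	sudo systemctl restart kubelet

Whether minikube's --extra-config=kubelet.* flags can reach this config field is not shown by this run; the suggestion above only covers the cgroup-driver flag.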
	
	
	==> CRI-O <==
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927580587Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927620637Z" level=info msg="Starting seccomp notifier watcher"
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927668178Z" level=info msg="Create NRI interface"
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927758033Z" level=info msg="built-in NRI default validator is disabled"
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927766493Z" level=info msg="runtime interface created"
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927780007Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927786308Z" level=info msg="runtime interface starting up..."
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927792741Z" level=info msg="starting plugins..."
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927805771Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927872323Z" level=info msg="No systemd watchdog enabled"
	Dec 09 04:42:04 functional-331811 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 09 04:46:09 functional-331811 crio[9992]: time="2025-12-09T04:46:09.942951614Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=d42015e0-8a7e-47f7-95a2-398ea8aa48f1 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:46:09 functional-331811 crio[9992]: time="2025-12-09T04:46:09.943749037Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=554d2336-7df0-4ab3-87a2-3f0040c79a84 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:46:09 functional-331811 crio[9992]: time="2025-12-09T04:46:09.944291229Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=70fb14c4-f971-4387-8e1b-10c98c4791aa name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:46:09 functional-331811 crio[9992]: time="2025-12-09T04:46:09.944730675Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=36db540a-ff25-4b5c-b7d7-cd7322fbd4bb name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:46:09 functional-331811 crio[9992]: time="2025-12-09T04:46:09.945138629Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=7427d70a-8db2-44c3-88f8-0607ec671ff6 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:46:09 functional-331811 crio[9992]: time="2025-12-09T04:46:09.945576229Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=b63b04fd-62c4-4cf0-9b5b-23eef2eb12c5 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:46:09 functional-331811 crio[9992]: time="2025-12-09T04:46:09.946074564Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=287329f7-949c-4b5b-8433-0437004398fd name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.930917732Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=60059689-b22e-4d2c-a555-518b088e6c52 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.93157629Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=cbef184f-5cab-42ab-88e7-b508de5c76c0 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.932075323Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=edcddd48-11b2-4a3e-b703-e9cffa332272 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.932520767Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=b8ee1139-0fe9-45a4-8cea-2e86a978a2fc name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.932923437Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=466ae3ad-f5a9-4d87-be0b-42f8886ae7b1 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.933429871Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=52758864-5ad7-4972-9017-2c4a591649f4 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.933861662Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=61e91b9e-e75b-4cf2-b677-070bdf524fb9 name=/runtime.v1.ImageService/ImageStatus
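
Notably, the CRI-O journal above records only ImageStatus probes and never a sandbox or container creation, which matches the empty container table below: kubelet died during config validation, before asking the runtime for anything. A quick cross-check on the node, assuming crictl is available as in the minikube-run commands earlier in this log (illustrative, not part of the test):

	sudo crictl ps -a --name kube-apiserver    # expected empty here: the static pod was never created
	sudo crictl images | grep registry.k8s.io  # images may be present even though nothing ran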
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:54:15.371704   21279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:54:15.372339   21279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:54:15.373532   21279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:54:15.374149   21279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:54:15.378321   21279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 9 02:15] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 03:35] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 04:15] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 04:17] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:23] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:24] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:41] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 04:54:15 up  9:36,  0 user,  load average: 0.05, 0.16, 0.43
	Linux functional-331811 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 09 04:54:13 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:54:13 functional-331811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 961.
	Dec 09 04:54:13 functional-331811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:54:13 functional-331811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:54:13 functional-331811 kubelet[21149]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:54:13 functional-331811 kubelet[21149]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:54:13 functional-331811 kubelet[21149]: E1209 04:54:13.894748   21149 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:54:13 functional-331811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:54:13 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:54:14 functional-331811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 962.
	Dec 09 04:54:14 functional-331811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:54:14 functional-331811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:54:14 functional-331811 kubelet[21196]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:54:14 functional-331811 kubelet[21196]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:54:14 functional-331811 kubelet[21196]: E1209 04:54:14.660459   21196 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:54:14 functional-331811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:54:14 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:54:15 functional-331811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 963.
	Dec 09 04:54:15 functional-331811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:54:15 functional-331811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:54:15 functional-331811 kubelet[21283]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:54:15 functional-331811 kubelet[21283]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:54:15 functional-331811 kubelet[21283]: E1209 04:54:15.379856   21283 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:54:15 functional-331811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:54:15 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
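
The kubelet journal gives the definitive error: "kubelet is configured to not run on a host using cgroup v1", restarting 963 times without ever serving the :10248 healthz endpoint kubeadm polls. Which cgroup version a host runs can be confirmed with a one-line filesystem check; this is a generic diagnostic, not something this run performed:

	stat -fc %T /sys/fs/cgroup
	# cgroup2fs -> unified cgroup v2; tmpfs -> legacy cgroup v1 (the case on this 5.15 AWS kernel)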
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-331811 -n functional-331811
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-331811 -n functional-331811: exit status 2 (352.293504ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-331811" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (734.66s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.21s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-331811 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-331811 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (59.348681ms)

-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-331811 get po -l tier=control-plane -n kube-system -o=json": exit status 1
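
ComponentHealth only needs the control-plane pods' names and phases; when the apiserver is reachable, the same information the test parses out of the JSON can be pulled directly with a jsonpath query (an illustrative variant of the command above, not the test's own code):

	kubectl --context functional-331811 get po -n kube-system -l tier=control-plane \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'

Here it would fail identically, since 192.168.49.2:8441 is refusing connections.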
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-331811
helpers_test.go:243: (dbg) docker inspect functional-331811:

-- stdout --
	[
	    {
	        "Id": "51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87",
	        "Created": "2025-12-09T04:27:19.770188806Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1609115,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T04:27:19.828715728Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/hostname",
	        "HostsPath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/hosts",
	        "LogPath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87-json.log",
	        "Name": "/functional-331811",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-331811:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-331811",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87",
	                "LowerDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685-init/diff:/var/lib/docker/overlay2/cb3f2b8eaaa8875b2899fccd39c4eec1759909855a0b804bc10246bdeabb16ed/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-331811",
	                "Source": "/var/lib/docker/volumes/functional-331811/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-331811",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-331811",
	                "name.minikube.sigs.k8s.io": "functional-331811",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5c0753338127320f08906f0ae98414e1971b55970cf028db179c2214fd2722cb",
	            "SandboxKey": "/var/run/docker/netns/5c0753338127",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34255"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34256"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34259"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34257"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34258"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-331811": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:27:66:bb:a1:d6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8c16962547dedb5d6155d1546bcc27e347ab5261f9ad46fc3b09cc8fb9cc112f",
	                    "EndpointID": "1a5d6a22e9497009b4121ea56dc4839e2ff8827d92252c0464236c5f49c11216",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-331811",
	                        "51da5dad63e9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
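
The inspect above shows the container itself is healthy at the docker level (State.Status is "running", the network endpoint holds 192.168.49.2), so the failure sits inside the guest. When only those two facts matter, a format template keeps the post-mortem short; a sketch, assuming the same container name:

	# Print only the container state and its IP on the minikube network.
	docker inspect -f '{{.State.Status}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' functional-331811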
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-331811 -n functional-331811
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-331811 -n functional-331811: exit status 2 (304.827437ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
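
The --format={{.Host}} template narrows the output to the host field, which is why the command prints Running yet still exits 2: the exit status encodes a non-Running component elsewhere (hence the helper's "may be ok"). The unfiltered form would also show the kubelet and apiserver fields; a sketch using the report's own binary layout:

	out/minikube-linux-arm64 status -p functional-331811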
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-790468 image ls --format short --alsologtostderr                                                                                       │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ ssh     │ functional-790468 ssh pgrep buildkitd                                                                                                             │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │                     │
	│ image   │ functional-790468 image ls --format json --alsologtostderr                                                                                        │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image   │ functional-790468 image ls --format table --alsologtostderr                                                                                       │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image   │ functional-790468 image build -t localhost/my-image:functional-790468 testdata/build --alsologtostderr                                            │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ image   │ functional-790468 image ls                                                                                                                        │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ delete  │ -p functional-790468                                                                                                                              │ functional-790468 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │ 09 Dec 25 04:27 UTC │
	│ start   │ -p functional-331811 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:27 UTC │                     │
	│ start   │ -p functional-331811 --alsologtostderr -v=8                                                                                                       │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:35 UTC │                     │
	│ cache   │ functional-331811 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ cache   │ functional-331811 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ cache   │ functional-331811 cache add registry.k8s.io/pause:latest                                                                                          │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ cache   │ functional-331811 cache add minikube-local-cache-test:functional-331811                                                                           │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ cache   │ functional-331811 cache delete minikube-local-cache-test:functional-331811                                                                        │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ ssh     │ functional-331811 ssh sudo crictl images                                                                                                          │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ ssh     │ functional-331811 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ ssh     │ functional-331811 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │                     │
	│ cache   │ functional-331811 cache reload                                                                                                                    │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ ssh     │ functional-331811 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ kubectl │ functional-331811 kubectl -- --context functional-331811 get pods                                                                                 │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │                     │
	│ start   │ -p functional-331811 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                          │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:42 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 04:42:01
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 04:42:01.637786 1620518 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:42:01.637909 1620518 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:42:01.637913 1620518 out.go:374] Setting ErrFile to fd 2...
	I1209 04:42:01.637918 1620518 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:42:01.638166 1620518 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 04:42:01.638522 1620518 out.go:368] Setting JSON to false
	I1209 04:42:01.639450 1620518 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":33862,"bootTime":1765221460,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1209 04:42:01.639510 1620518 start.go:143] virtualization:  
	I1209 04:42:01.642955 1620518 out.go:179] * [functional-331811] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 04:42:01.646014 1620518 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 04:42:01.646101 1620518 notify.go:221] Checking for updates...
	I1209 04:42:01.651837 1620518 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:42:01.654857 1620518 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:42:01.657670 1620518 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1577059/.minikube
	I1209 04:42:01.660510 1620518 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 04:42:01.663383 1620518 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 04:42:01.666731 1620518 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 04:42:01.666828 1620518 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:42:01.689070 1620518 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:42:01.689175 1620518 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:42:01.744025 1620518 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-09 04:42:01.734708732 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:42:01.744121 1620518 docker.go:319] overlay module found
	I1209 04:42:01.749121 1620518 out.go:179] * Using the docker driver based on existing profile
	I1209 04:42:01.751932 1620518 start.go:309] selected driver: docker
	I1209 04:42:01.751941 1620518 start.go:927] validating driver "docker" against &{Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:42:01.752051 1620518 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 04:42:01.752158 1620518 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:42:01.824076 1620518 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-09 04:42:01.81179321 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:42:01.824456 1620518 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 04:42:01.824480 1620518 cni.go:84] Creating CNI manager for ""
	I1209 04:42:01.824537 1620518 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 04:42:01.824578 1620518 start.go:353] cluster config:
	{Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:42:01.827700 1620518 out.go:179] * Starting "functional-331811" primary control-plane node in "functional-331811" cluster
	I1209 04:42:01.830624 1620518 cache.go:134] Beginning downloading kic base image for docker with crio
	I1209 04:42:01.833519 1620518 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 04:42:01.836178 1620518 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1209 04:42:01.836217 1620518 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1209 04:42:01.836228 1620518 cache.go:65] Caching tarball of preloaded images
	I1209 04:42:01.836255 1620518 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 04:42:01.836324 1620518 preload.go:238] Found /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1209 04:42:01.836333 1620518 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1209 04:42:01.836451 1620518 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/config.json ...
	I1209 04:42:01.855430 1620518 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 04:42:01.855441 1620518 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 04:42:01.855455 1620518 cache.go:243] Successfully downloaded all kic artifacts
	I1209 04:42:01.855485 1620518 start.go:360] acquireMachinesLock for functional-331811: {Name:mkd467b4f3dd08f05040481144eb7b6b1e27d3ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 04:42:01.855543 1620518 start.go:364] duration metric: took 40.87µs to acquireMachinesLock for "functional-331811"
	I1209 04:42:01.855566 1620518 start.go:96] Skipping create...Using existing machine configuration
	I1209 04:42:01.855570 1620518 fix.go:54] fixHost starting: 
	I1209 04:42:01.855819 1620518 cli_runner.go:164] Run: docker container inspect functional-331811 --format={{.State.Status}}
	I1209 04:42:01.873325 1620518 fix.go:112] recreateIfNeeded on functional-331811: state=Running err=<nil>
	W1209 04:42:01.873351 1620518 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 04:42:01.876665 1620518 out.go:252] * Updating the running docker "functional-331811" container ...
	I1209 04:42:01.876693 1620518 machine.go:94] provisionDockerMachine start ...
	I1209 04:42:01.876797 1620518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:42:01.894796 1620518 main.go:143] libmachine: Using SSH client type: native
	I1209 04:42:01.895121 1620518 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34255 <nil> <nil>}
	I1209 04:42:01.895129 1620518 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 04:42:02.058680 1620518 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-331811
	
	I1209 04:42:02.058696 1620518 ubuntu.go:182] provisioning hostname "functional-331811"
	I1209 04:42:02.058761 1620518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:42:02.090920 1620518 main.go:143] libmachine: Using SSH client type: native
	I1209 04:42:02.091365 1620518 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34255 <nil> <nil>}
	I1209 04:42:02.091379 1620518 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-331811 && echo "functional-331811" | sudo tee /etc/hostname
	I1209 04:42:02.262883 1620518 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-331811
	
	I1209 04:42:02.262960 1620518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:42:02.281315 1620518 main.go:143] libmachine: Using SSH client type: native
	I1209 04:42:02.281623 1620518 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34255 <nil> <nil>}
	I1209 04:42:02.281637 1620518 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-331811' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-331811/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-331811' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 04:42:02.435135 1620518 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 04:42:02.435152 1620518 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1577059/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1577059/.minikube}
	I1209 04:42:02.435179 1620518 ubuntu.go:190] setting up certificates
	I1209 04:42:02.435197 1620518 provision.go:84] configureAuth start
	I1209 04:42:02.435267 1620518 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-331811
	I1209 04:42:02.452748 1620518 provision.go:143] copyHostCerts
	I1209 04:42:02.452806 1620518 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem, removing ...
	I1209 04:42:02.452813 1620518 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem
	I1209 04:42:02.452891 1620518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem (1078 bytes)
	I1209 04:42:02.452996 1620518 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem, removing ...
	I1209 04:42:02.453000 1620518 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem
	I1209 04:42:02.453027 1620518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem (1123 bytes)
	I1209 04:42:02.453088 1620518 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem, removing ...
	I1209 04:42:02.453092 1620518 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem
	I1209 04:42:02.453121 1620518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem (1675 bytes)
	I1209 04:42:02.453207 1620518 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem org=jenkins.functional-331811 san=[127.0.0.1 192.168.49.2 functional-331811 localhost minikube]
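
The server certificate above is generated with the SANs listed in the san=[...] field. Whether they actually landed in the issued certificate can be confirmed from the host; a hypothetical check, assuming openssl is installed on the Jenkins machine:

	openssl x509 -in /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'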
	I1209 04:42:02.729112 1620518 provision.go:177] copyRemoteCerts
	I1209 04:42:02.729174 1620518 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 04:42:02.729226 1620518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:42:02.747750 1620518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:42:02.856241 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 04:42:02.877475 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 04:42:02.898967 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 04:42:02.917189 1620518 provision.go:87] duration metric: took 481.970064ms to configureAuth
	I1209 04:42:02.917207 1620518 ubuntu.go:206] setting minikube options for container-runtime
	I1209 04:42:02.917407 1620518 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 04:42:02.917510 1620518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:42:02.935642 1620518 main.go:143] libmachine: Using SSH client type: native
	I1209 04:42:02.935957 1620518 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34255 <nil> <nil>}
	I1209 04:42:02.935968 1620518 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 04:42:03.293502 1620518 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 04:42:03.293517 1620518 machine.go:97] duration metric: took 1.416817164s to provisionDockerMachine
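
The provisioning step above writes a CRI-O drop-in (/etc/sysconfig/crio.minikube) over SSH and restarts the service. A minimal way to confirm the drop-in landed, reusing the report's own binary (hypothetical, not part of the recorded run):

	out/minikube-linux-arm64 -p functional-331811 ssh "cat /etc/sysconfig/crio.minikube"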
	I1209 04:42:03.293527 1620518 start.go:293] postStartSetup for "functional-331811" (driver="docker")
	I1209 04:42:03.293537 1620518 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 04:42:03.293597 1620518 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 04:42:03.293653 1620518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:42:03.312696 1620518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:42:03.419010 1620518 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 04:42:03.422897 1620518 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 04:42:03.422917 1620518 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 04:42:03.422927 1620518 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1577059/.minikube/addons for local assets ...
	I1209 04:42:03.422995 1620518 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1577059/.minikube/files for local assets ...
	I1209 04:42:03.423075 1620518 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem -> 15805212.pem in /etc/ssl/certs
	I1209 04:42:03.423167 1620518 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/test/nested/copy/1580521/hosts -> hosts in /etc/test/nested/copy/1580521
	I1209 04:42:03.423212 1620518 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1580521
	I1209 04:42:03.431449 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem --> /etc/ssl/certs/15805212.pem (1708 bytes)
	I1209 04:42:03.450423 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/test/nested/copy/1580521/hosts --> /etc/test/nested/copy/1580521/hosts (40 bytes)
	I1209 04:42:03.470159 1620518 start.go:296] duration metric: took 176.617533ms for postStartSetup
	I1209 04:42:03.470235 1620518 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 04:42:03.470292 1620518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:42:03.488346 1620518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:42:03.593519 1620518 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 04:42:03.598841 1620518 fix.go:56] duration metric: took 1.743264094s for fixHost
	I1209 04:42:03.598859 1620518 start.go:83] releasing machines lock for "functional-331811", held for 1.743308418s
	I1209 04:42:03.598929 1620518 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-331811
	I1209 04:42:03.617266 1620518 ssh_runner.go:195] Run: cat /version.json
	I1209 04:42:03.617315 1620518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:42:03.617558 1620518 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 04:42:03.617603 1620518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:42:03.646611 1620518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:42:03.653495 1620518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:42:03.852499 1620518 ssh_runner.go:195] Run: systemctl --version
	I1209 04:42:03.859513 1620518 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 04:42:03.897674 1620518 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 04:42:03.902590 1620518 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 04:42:03.902664 1620518 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 04:42:03.911194 1620518 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 04:42:03.911208 1620518 start.go:496] detecting cgroup driver to use...
	I1209 04:42:03.911240 1620518 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 04:42:03.911304 1620518 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 04:42:03.926479 1620518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 04:42:03.940314 1620518 docker.go:218] disabling cri-docker service (if available) ...
	I1209 04:42:03.940374 1620518 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 04:42:03.956989 1620518 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 04:42:03.970857 1620518 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 04:42:04.105722 1620518 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 04:42:04.221024 1620518 docker.go:234] disabling docker service ...
	I1209 04:42:04.221082 1620518 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 04:42:04.236606 1620518 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 04:42:04.259126 1620518 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 04:42:04.406348 1620518 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 04:42:04.537870 1620518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 04:42:04.550770 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 04:42:04.565609 1620518 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 04:42:04.565666 1620518 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:42:04.574449 1620518 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 04:42:04.574512 1620518 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:42:04.583819 1620518 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:42:04.592696 1620518 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:42:04.601828 1620518 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 04:42:04.610342 1620518 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:42:04.619401 1620518 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:42:04.628176 1620518 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:42:04.637069 1620518 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 04:42:04.644806 1620518 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 04:42:04.652309 1620518 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:42:04.767112 1620518 ssh_runner.go:195] Run: sudo systemctl restart crio
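
The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) before the restart. A spot-check that the edits took effect after the restart (a sketch, assuming the same profile name):

	out/minikube-linux-arm64 -p functional-331811 ssh "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf"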
	I1209 04:42:04.935446 1620518 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 04:42:04.935507 1620518 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 04:42:04.939304 1620518 start.go:564] Will wait 60s for crictl version
	I1209 04:42:04.939369 1620518 ssh_runner.go:195] Run: which crictl
	I1209 04:42:04.942772 1620518 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 04:42:04.967172 1620518 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1209 04:42:04.967246 1620518 ssh_runner.go:195] Run: crio --version
	I1209 04:42:05.000450 1620518 ssh_runner.go:195] Run: crio --version
	I1209 04:42:05.039508 1620518 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1209 04:42:05.042351 1620518 cli_runner.go:164] Run: docker network inspect functional-331811 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
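
The long --format template above flattens the network inspect into a single JSON object for parsing. The individual fields it pulls can also be read with simpler templates; a sketch, not what minikube itself runs:

	docker network inspect functional-331811 --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'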
	I1209 04:42:05.058209 1620518 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1209 04:42:05.065398 1620518 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1209 04:42:05.068071 1620518 kubeadm.go:884] updating cluster {Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 04:42:05.068222 1620518 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1209 04:42:05.068288 1620518 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:42:05.125308 1620518 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 04:42:05.125320 1620518 crio.go:433] Images already preloaded, skipping extraction
	I1209 04:42:05.125384 1620518 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:42:05.156125 1620518 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 04:42:05.156137 1620518 cache_images.go:86] Images are preloaded, skipping loading
	I1209 04:42:05.156143 1620518 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1209 04:42:05.156245 1620518 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-331811 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 04:42:05.156329 1620518 ssh_runner.go:195] Run: crio config
	I1209 04:42:05.230295 1620518 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1209 04:42:05.230327 1620518 cni.go:84] Creating CNI manager for ""
	I1209 04:42:05.230335 1620518 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 04:42:05.230348 1620518 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 04:42:05.230371 1620518 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-331811 NodeName:functional-331811 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 04:42:05.230520 1620518 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-331811"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
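The rendered kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new before being promoted; the diff against the previous file appears further down in this log. A sketch for inspecting the staged file on the node (paths taken from this log):

    minikube -p functional-331811 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
    # compare against the currently active config, as minikube itself does below
    minikube -p functional-331811 ssh -- sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new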
	I1209 04:42:05.230600 1620518 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1209 04:42:05.238799 1620518 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 04:42:05.238882 1620518 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 04:42:05.246819 1620518 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1209 04:42:05.260010 1620518 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1209 04:42:05.273192 1620518 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1209 04:42:05.287174 1620518 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1209 04:42:05.291010 1620518 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:42:05.412581 1620518 ssh_runner.go:195] Run: sudo systemctl start kubelet
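After daemon-reload, kubelet is started directly and the flow moves on without waiting for it here. If kubelet were failing to come up at this point, the standard systemd checks on the node would be:

    sudo systemctl is-active kubelet              # prints "active" once the unit is up
    sudo journalctl -u kubelet -n 50 --no-pager   # most recent kubelet log lines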
	I1209 04:42:05.825078 1620518 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811 for IP: 192.168.49.2
	I1209 04:42:05.825089 1620518 certs.go:195] generating shared ca certs ...
	I1209 04:42:05.825104 1620518 certs.go:227] acquiring lock for ca certs: {Name:mkbe8bce08db7aa945866791683d426e1b560718 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:42:05.825273 1620518 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key
	I1209 04:42:05.825311 1620518 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key
	I1209 04:42:05.825317 1620518 certs.go:257] generating profile certs ...
	I1209 04:42:05.825400 1620518 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.key
	I1209 04:42:05.825453 1620518 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.key.29f4af34
	I1209 04:42:05.825489 1620518 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.key
	I1209 04:42:05.825606 1620518 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem (1338 bytes)
	W1209 04:42:05.825637 1620518 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521_empty.pem, impossibly tiny 0 bytes
	I1209 04:42:05.825643 1620518 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 04:42:05.825670 1620518 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem (1078 bytes)
	I1209 04:42:05.825692 1620518 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem (1123 bytes)
	I1209 04:42:05.825717 1620518 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem (1675 bytes)
	I1209 04:42:05.825764 1620518 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem (1708 bytes)
	I1209 04:42:05.826339 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 04:42:05.847398 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 04:42:05.867264 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 04:42:05.887896 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 04:42:05.907076 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 04:42:05.926224 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 04:42:05.944236 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 04:42:05.962834 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 04:42:05.981333 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem --> /usr/share/ca-certificates/15805212.pem (1708 bytes)
	I1209 04:42:06.001204 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 04:42:06.024226 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem --> /usr/share/ca-certificates/1580521.pem (1338 bytes)
	I1209 04:42:06.044638 1620518 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 04:42:06.059443 1620518 ssh_runner.go:195] Run: openssl version
	I1209 04:42:06.066215 1620518 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15805212.pem
	I1209 04:42:06.074237 1620518 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15805212.pem /etc/ssl/certs/15805212.pem
	I1209 04:42:06.083015 1620518 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15805212.pem
	I1209 04:42:06.087232 1620518 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:27 /usr/share/ca-certificates/15805212.pem
	I1209 04:42:06.087310 1620518 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15805212.pem
	I1209 04:42:06.129553 1620518 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 04:42:06.137400 1620518 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:42:06.144988 1620518 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 04:42:06.152871 1620518 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:42:06.156811 1620518 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:17 /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:42:06.156876 1620518 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:42:06.198268 1620518 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 04:42:06.205673 1620518 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1580521.pem
	I1209 04:42:06.212766 1620518 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1580521.pem /etc/ssl/certs/1580521.pem
	I1209 04:42:06.220239 1620518 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1580521.pem
	I1209 04:42:06.223985 1620518 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:27 /usr/share/ca-certificates/1580521.pem
	I1209 04:42:06.224039 1620518 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1580521.pem
	I1209 04:42:06.265241 1620518 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
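The three ln/openssl rounds above install each CA into the system trust store. OpenSSL locates trusted certificates via hash-named symlinks, so the <hash>.0 names checked here (3ec20f2e.0, b5213941.0, 51391683.0) come from `openssl x509 -hash`. The same dance for one certificate, as a sketch:

    # derive the subject-hash symlink name for a CA cert and install it
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"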
	I1209 04:42:06.272666 1620518 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 04:42:06.276249 1620518 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 04:42:06.318459 1620518 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 04:42:06.361504 1620518 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 04:42:06.402819 1620518 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 04:42:06.443793 1620518 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 04:42:06.485065 1620518 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
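Each `-checkend 86400` call above asks OpenSSL whether the certificate will still be valid 86400 seconds (24 h) from now: exit status 0 means it will not have expired by then, non-zero means it will. For example:

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
      echo "apiserver cert valid for at least another 24h"
    else
      echo "apiserver cert expires within 24h"
    fi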
	I1209 04:42:06.526159 1620518 kubeadm.go:401] StartCluster: {Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:42:06.526240 1620518 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 04:42:06.526302 1620518 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:42:06.557743 1620518 cri.go:89] found id: ""
	I1209 04:42:06.557806 1620518 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 04:42:06.565919 1620518 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1209 04:42:06.565929 1620518 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1209 04:42:06.565979 1620518 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 04:42:06.574421 1620518 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:42:06.574975 1620518 kubeconfig.go:125] found "functional-331811" server: "https://192.168.49.2:8441"
	I1209 04:42:06.576238 1620518 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 04:42:06.585800 1620518 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-09 04:27:27.994828232 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-09 04:42:05.282481991 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
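The reconfigure decision above keys off the exit status of `diff -u`: 0 means the files are identical, 1 means they differ (config drift, as here), 2 means a file could not be read. A minimal sketch of the same check:

    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
      echo "kubeadm config drift detected - cluster will be reconfigured"
    fi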
	I1209 04:42:06.585820 1620518 kubeadm.go:1161] stopping kube-system containers ...
	I1209 04:42:06.585830 1620518 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 04:42:06.585887 1620518 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:42:06.615364 1620518 cri.go:89] found id: ""
	I1209 04:42:06.615424 1620518 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 04:42:06.632416 1620518 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 04:42:06.640276 1620518 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec  9 04:31 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  9 04:31 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec  9 04:31 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec  9 04:31 /etc/kubernetes/scheduler.conf
	
	I1209 04:42:06.640334 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1209 04:42:06.648234 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1209 04:42:06.655526 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:42:06.655581 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 04:42:06.663036 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1209 04:42:06.670853 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:42:06.670911 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 04:42:06.678990 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1209 04:42:06.687863 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:42:06.687915 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
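Each grep/rm pair above is a guard: when the expected control-plane endpoint string is absent from a kubeconfig (grep exits 1), the stale file is removed so the kubeadm kubeconfig phase below regenerates it. The pattern, as a sketch:

    ep="https://control-plane.minikube.internal:8441"
    for f in /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf; do
      sudo grep -q "$ep" "$f" || sudo rm -f "$f"
    done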
	I1209 04:42:06.696417 1620518 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 04:42:06.705368 1620518 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 04:42:06.756797 1620518 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 04:42:08.115058 1620518 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.358236541s)
	I1209 04:42:08.115116 1620518 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 04:42:08.320381 1620518 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 04:42:08.380846 1620518 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 04:42:08.425206 1620518 api_server.go:52] waiting for apiserver process to appear ...
	I1209 04:42:08.425277 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:08.925770 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:09.425673 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:09.926006 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:10.426138 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:10.926333 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:11.426044 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:11.925865 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:12.426407 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:12.925704 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:13.425999 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:13.926113 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:14.426341 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:14.926036 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:15.425471 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:15.926251 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:16.426322 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:16.925477 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:17.426300 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:17.926252 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:18.426140 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:18.925451 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:19.426343 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:19.925709 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:20.426256 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:20.925497 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:21.425570 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:21.926150 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:22.425937 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:22.926432 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:23.425437 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:23.926221 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:24.425823 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:24.926268 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:25.426017 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:25.926031 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:26.425377 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:26.925360 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:27.425992 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:27.925571 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:28.425482 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:28.926361 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:29.426063 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:29.926242 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:30.425494 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:30.926061 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:31.425707 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:31.925370 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:32.426205 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:32.926119 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:33.426163 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:33.925480 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:34.425584 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:34.926360 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:35.426207 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:35.926064 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:36.426077 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:36.925371 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:37.426110 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:37.925474 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:38.425443 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:38.926209 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:39.426345 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:39.925457 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:40.426372 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:40.926174 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:41.426131 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:41.926382 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:42.426266 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:42.926376 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:43.425722 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:43.925468 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:44.425612 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:44.925853 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:45.425892 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:45.925441 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:46.425589 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:46.926038 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:47.425591 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:47.926409 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:48.426312 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:48.925878 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:49.425458 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:49.925689 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:50.426143 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:50.926139 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:51.426335 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:51.926396 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:52.425396 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:52.925485 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:53.425608 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:53.925545 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:54.425421 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:54.925703 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:55.426311 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:55.925392 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:56.426241 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:56.925364 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:57.425372 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:57.925465 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:58.425848 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:58.925784 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:59.425624 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:59.925465 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:00.425417 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:00.926188 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:01.426323 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:01.925858 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:02.426311 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:02.925474 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:03.425747 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:03.926082 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:04.425472 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:04.925448 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:05.425655 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:05.925700 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:06.425472 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:06.926215 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:07.425795 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:07.925648 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
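The half-second polling loop above waits for an apiserver process to appear: `pgrep -xnf` matches the pattern against the full command line (-f), requires the whole line to match (-x), and reports only the newest match (-n), exiting non-zero while no process matches. One probe reproduced by hand:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' && echo "apiserver up" || echo "not yet"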
	I1209 04:43:08.425431 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:08.425513 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:08.451611 1620518 cri.go:89] found id: ""
	I1209 04:43:08.451625 1620518 logs.go:282] 0 containers: []
	W1209 04:43:08.451634 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:08.451644 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:08.451703 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:08.478028 1620518 cri.go:89] found id: ""
	I1209 04:43:08.478042 1620518 logs.go:282] 0 containers: []
	W1209 04:43:08.478049 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:08.478054 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:08.478116 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:08.504952 1620518 cri.go:89] found id: ""
	I1209 04:43:08.504967 1620518 logs.go:282] 0 containers: []
	W1209 04:43:08.504974 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:08.504980 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:08.505037 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:08.531444 1620518 cri.go:89] found id: ""
	I1209 04:43:08.531460 1620518 logs.go:282] 0 containers: []
	W1209 04:43:08.531468 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:08.531473 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:08.531558 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:08.557796 1620518 cri.go:89] found id: ""
	I1209 04:43:08.557810 1620518 logs.go:282] 0 containers: []
	W1209 04:43:08.557817 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:08.557822 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:08.557878 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:08.589421 1620518 cri.go:89] found id: ""
	I1209 04:43:08.589436 1620518 logs.go:282] 0 containers: []
	W1209 04:43:08.589443 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:08.589448 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:08.589505 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:08.626762 1620518 cri.go:89] found id: ""
	I1209 04:43:08.626776 1620518 logs.go:282] 0 containers: []
	W1209 04:43:08.626783 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:08.626792 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:08.626802 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:08.694456 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:08.694477 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:08.709310 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:08.709333 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:08.773551 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:08.764935   11065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:08.765641   11065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:08.766378   11065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:08.767874   11065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:08.768158   11065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:43:08.764935   11065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:08.765641   11065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:08.766378   11065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:08.767874   11065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:08.768158   11065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
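The connection-refused errors above mean nothing is listening on localhost:8441 yet, consistent with the empty crictl listings before them. Two standard probes from inside the node (a sketch, not part of the test run) to confirm:

    sudo ss -ltnp | grep 8441 || echo "no listener on 8441"
    curl -sk https://localhost:8441/healthz || true   # refused until kube-apiserver binds the port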
	I1209 04:43:08.773573 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:08.773584 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:08.840868 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:08.840888 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
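The container-status command above is a shell fallback chain: `which crictl || echo crictl` yields the crictl path if it is on PATH (and the bare name otherwise), and if the crictl invocation fails the command falls back to `docker ps -a`. Equivalently:

    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a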
	I1209 04:43:11.374296 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:11.384818 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:11.384880 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:11.413700 1620518 cri.go:89] found id: ""
	I1209 04:43:11.413713 1620518 logs.go:282] 0 containers: []
	W1209 04:43:11.413720 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:11.413725 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:11.413783 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:11.439148 1620518 cri.go:89] found id: ""
	I1209 04:43:11.439163 1620518 logs.go:282] 0 containers: []
	W1209 04:43:11.439170 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:11.439175 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:11.439236 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:11.468833 1620518 cri.go:89] found id: ""
	I1209 04:43:11.468847 1620518 logs.go:282] 0 containers: []
	W1209 04:43:11.468854 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:11.468859 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:11.468917 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:11.501328 1620518 cri.go:89] found id: ""
	I1209 04:43:11.501343 1620518 logs.go:282] 0 containers: []
	W1209 04:43:11.501350 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:11.501355 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:11.501420 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:11.527673 1620518 cri.go:89] found id: ""
	I1209 04:43:11.527687 1620518 logs.go:282] 0 containers: []
	W1209 04:43:11.527695 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:11.527700 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:11.527757 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:11.552531 1620518 cri.go:89] found id: ""
	I1209 04:43:11.552545 1620518 logs.go:282] 0 containers: []
	W1209 04:43:11.552552 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:11.552557 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:11.552618 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:11.591493 1620518 cri.go:89] found id: ""
	I1209 04:43:11.591507 1620518 logs.go:282] 0 containers: []
	W1209 04:43:11.591514 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:11.591522 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:11.591538 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:11.626001 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:11.626017 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:11.699914 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:11.699939 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:11.715894 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:11.715917 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:11.780735 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:11.772451   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:11.773056   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:11.774787   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:11.775166   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:11.776611   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:43:11.772451   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:11.773056   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:11.774787   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:11.775166   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:11.776611   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:11.780754 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:11.780765 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:14.352369 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:14.362558 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:14.362633 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:14.388407 1620518 cri.go:89] found id: ""
	I1209 04:43:14.388421 1620518 logs.go:282] 0 containers: []
	W1209 04:43:14.388428 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:14.388433 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:14.388490 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:14.415937 1620518 cri.go:89] found id: ""
	I1209 04:43:14.415952 1620518 logs.go:282] 0 containers: []
	W1209 04:43:14.415960 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:14.415965 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:14.416029 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:14.445418 1620518 cri.go:89] found id: ""
	I1209 04:43:14.445433 1620518 logs.go:282] 0 containers: []
	W1209 04:43:14.445440 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:14.445445 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:14.445513 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:14.471362 1620518 cri.go:89] found id: ""
	I1209 04:43:14.471376 1620518 logs.go:282] 0 containers: []
	W1209 04:43:14.471383 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:14.471388 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:14.471452 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:14.503134 1620518 cri.go:89] found id: ""
	I1209 04:43:14.503148 1620518 logs.go:282] 0 containers: []
	W1209 04:43:14.503155 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:14.503160 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:14.503219 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:14.529790 1620518 cri.go:89] found id: ""
	I1209 04:43:14.529803 1620518 logs.go:282] 0 containers: []
	W1209 04:43:14.529811 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:14.529816 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:14.529889 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:14.555803 1620518 cri.go:89] found id: ""
	I1209 04:43:14.555817 1620518 logs.go:282] 0 containers: []
	W1209 04:43:14.555824 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:14.555832 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:14.555843 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:14.632593 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:14.632611 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:14.648671 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:14.648687 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:14.713371 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:14.705883   11280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:14.706301   11280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:14.707740   11280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:14.708041   11280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:14.709450   11280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:43:14.705883   11280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:14.706301   11280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:14.707740   11280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:14.708041   11280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:14.709450   11280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:14.713382 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:14.713400 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:14.783824 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:14.783843 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:17.318936 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:17.329339 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:17.329407 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:17.356311 1620518 cri.go:89] found id: ""
	I1209 04:43:17.356330 1620518 logs.go:282] 0 containers: []
	W1209 04:43:17.356351 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:17.356356 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:17.356416 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:17.386438 1620518 cri.go:89] found id: ""
	I1209 04:43:17.386452 1620518 logs.go:282] 0 containers: []
	W1209 04:43:17.386460 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:17.386465 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:17.386528 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:17.411209 1620518 cri.go:89] found id: ""
	I1209 04:43:17.411222 1620518 logs.go:282] 0 containers: []
	W1209 04:43:17.411229 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:17.411234 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:17.411291 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:17.437189 1620518 cri.go:89] found id: ""
	I1209 04:43:17.437201 1620518 logs.go:282] 0 containers: []
	W1209 04:43:17.437208 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:17.437229 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:17.437286 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:17.463836 1620518 cri.go:89] found id: ""
	I1209 04:43:17.463850 1620518 logs.go:282] 0 containers: []
	W1209 04:43:17.463857 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:17.463862 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:17.463945 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:17.490604 1620518 cri.go:89] found id: ""
	I1209 04:43:17.490617 1620518 logs.go:282] 0 containers: []
	W1209 04:43:17.490625 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:17.490630 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:17.490691 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:17.517583 1620518 cri.go:89] found id: ""
	I1209 04:43:17.517597 1620518 logs.go:282] 0 containers: []
	W1209 04:43:17.517605 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:17.517612 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:17.517623 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:17.532622 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:17.532638 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:17.611464 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:17.600424   11377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:17.601337   11377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:17.605117   11377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:17.605586   11377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:17.607164   11377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:43:17.600424   11377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:17.601337   11377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:17.605117   11377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:17.605586   11377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:17.607164   11377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
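Every describe-nodes attempt fails the same way: kubectl cannot even fetch the API group list because nothing is listening on [::1]:8441, so the TCP handshake is refused outright. A plain dial check separates that case from auth or TLS problems before shelling out to kubectl; a minimal sketch, with the address taken from the log above:

package main

import (
	"fmt"
	"net"
	"time"
)

// apiserverListening reports whether anything accepts TCP connections on the
// apiserver port. "connection refused", as in the kubectl errors above, means
// the port is closed because no kube-apiserver is running; a timeout would
// instead point at networking. Minimal sketch, not minikube's health check.
func apiserverListening(addr string) bool {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	fmt.Println("apiserver reachable:", apiserverListening("localhost:8441"))
}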
	I1209 04:43:17.611477 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:17.611487 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:17.693672 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:17.693692 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:17.723232 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:17.723249 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
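The "Gathering logs for ..." entries shell out to a fixed set of collectors: journalctl for the kubelet and crio units, dmesg filtered to warnings and worse, kubectl describe nodes, and a container listing that deliberately falls back from crictl to docker. A sketch that replays those exact commands; the map keys are descriptive labels, not minikube identifiers:

package main

import (
	"fmt"
	"os/exec"
)

// gatherCommands mirrors the collection commands visible in the log entries
// above. The container-status entry tries crictl first and falls back to
// docker if crictl is missing or fails.
var gatherCommands = map[string]string{
	"kubelet":          "sudo journalctl -u kubelet -n 400",
	"CRI-O":            "sudo journalctl -u crio -n 400",
	"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
}

func main() {
	for label, cmd := range gatherCommands {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("=== %s (err=%v) ===\n%s", label, err, out)
	}
}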
	I1209 04:43:20.294145 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:20.304681 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:20.304742 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:20.333282 1620518 cri.go:89] found id: ""
	I1209 04:43:20.333297 1620518 logs.go:282] 0 containers: []
	W1209 04:43:20.333304 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:20.333309 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:20.333367 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:20.363210 1620518 cri.go:89] found id: ""
	I1209 04:43:20.363224 1620518 logs.go:282] 0 containers: []
	W1209 04:43:20.363231 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:20.363236 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:20.363300 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:20.387964 1620518 cri.go:89] found id: ""
	I1209 04:43:20.387978 1620518 logs.go:282] 0 containers: []
	W1209 04:43:20.387985 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:20.387995 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:20.388054 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:20.414851 1620518 cri.go:89] found id: ""
	I1209 04:43:20.414864 1620518 logs.go:282] 0 containers: []
	W1209 04:43:20.414871 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:20.414876 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:20.414943 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:20.441500 1620518 cri.go:89] found id: ""
	I1209 04:43:20.441514 1620518 logs.go:282] 0 containers: []
	W1209 04:43:20.441521 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:20.441526 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:20.441584 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:20.468302 1620518 cri.go:89] found id: ""
	I1209 04:43:20.468318 1620518 logs.go:282] 0 containers: []
	W1209 04:43:20.468325 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:20.468331 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:20.468393 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:20.497314 1620518 cri.go:89] found id: ""
	I1209 04:43:20.497328 1620518 logs.go:282] 0 containers: []
	W1209 04:43:20.497345 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:20.497354 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:20.497364 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:20.570464 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:20.570492 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:20.586642 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:20.586660 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:20.665367 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:20.657066   11489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:20.657608   11489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:20.659336   11489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:20.659839   11489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:20.661420   11489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:43:20.657066   11489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:20.657608   11489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:20.659336   11489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:20.659839   11489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:20.661420   11489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:20.665378 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:20.665389 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:20.733648 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:20.733669 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
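Each retry cycle then reopens with `sudo pgrep -xnf kube-apiserver.*minikube.*` (the next line, and the head of every cycle above): -f matches against the full command line, -x requires the whole line to match the pattern, and -n keeps only the newest PID. pgrep exits 1 when nothing matches, which is what drives the fall-through to the crictl queries. A sketch of interpreting that exit code; the helper name is hypothetical:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// apiserverPID wraps the pgrep probe that starts each cycle above. Exit
// status 1 from pgrep means "no process matched" and is mapped to ok=false;
// any other failure is a real error.
func apiserverPID() (pid string, ok bool, err error) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return "", false, nil
	}
	if err != nil {
		return "", false, err
	}
	return strings.TrimSpace(string(out)), true, nil
}

func main() {
	pid, ok, err := apiserverPID()
	fmt.Printf("pid=%q running=%v err=%v\n", pid, ok, err)
}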
	I1209 04:43:23.265697 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:23.275834 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:23.275893 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:23.304587 1620518 cri.go:89] found id: ""
	I1209 04:43:23.304613 1620518 logs.go:282] 0 containers: []
	W1209 04:43:23.304620 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:23.304626 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:23.304692 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:23.329381 1620518 cri.go:89] found id: ""
	I1209 04:43:23.329406 1620518 logs.go:282] 0 containers: []
	W1209 04:43:23.329414 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:23.329419 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:23.329485 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:23.355201 1620518 cri.go:89] found id: ""
	I1209 04:43:23.355215 1620518 logs.go:282] 0 containers: []
	W1209 04:43:23.355222 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:23.355227 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:23.355289 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:23.380238 1620518 cri.go:89] found id: ""
	I1209 04:43:23.380251 1620518 logs.go:282] 0 containers: []
	W1209 04:43:23.380258 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:23.380263 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:23.380322 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:23.409750 1620518 cri.go:89] found id: ""
	I1209 04:43:23.409764 1620518 logs.go:282] 0 containers: []
	W1209 04:43:23.409771 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:23.409776 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:23.409838 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:23.437575 1620518 cri.go:89] found id: ""
	I1209 04:43:23.437588 1620518 logs.go:282] 0 containers: []
	W1209 04:43:23.437595 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:23.437600 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:23.437657 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:23.464403 1620518 cri.go:89] found id: ""
	I1209 04:43:23.464418 1620518 logs.go:282] 0 containers: []
	W1209 04:43:23.464425 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:23.464432 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:23.464444 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:23.479567 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:23.479583 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:23.543433 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:23.534948   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:23.535540   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:23.537123   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:23.537643   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:23.539288   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:43:23.534948   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:23.535540   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:23.537123   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:23.537643   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:23.539288   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:23.543443 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:23.543454 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:23.620689 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:23.620709 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:23.660232 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:23.660249 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
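The cycle timestamps (04:43:17, :20, :23, :26, ...) show the whole probe-then-gather sequence repeating on a roughly three-second cadence until some overall deadline expires. A minimal retry loop with that shape; the interval is inferred from the timestamps, not taken from minikube's source:

package main

import (
	"context"
	"fmt"
	"time"
)

// waitForAPIServer polls check() on the ~3s cadence visible above until it
// succeeds or the context deadline expires.
func waitForAPIServer(ctx context.Context, check func() bool) error {
	ticker := time.NewTicker(3 * time.Second)
	defer ticker.Stop()
	for {
		if check() {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver never became ready: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	// An always-failing probe reproduces the endless cycles in the log.
	fmt.Println(waitForAPIServer(ctx, func() bool { return false }))
}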
	I1209 04:43:26.230943 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:26.242046 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:26.242107 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:26.269716 1620518 cri.go:89] found id: ""
	I1209 04:43:26.269729 1620518 logs.go:282] 0 containers: []
	W1209 04:43:26.269736 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:26.269741 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:26.269798 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:26.296756 1620518 cri.go:89] found id: ""
	I1209 04:43:26.296771 1620518 logs.go:282] 0 containers: []
	W1209 04:43:26.296778 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:26.296783 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:26.296844 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:26.325789 1620518 cri.go:89] found id: ""
	I1209 04:43:26.325803 1620518 logs.go:282] 0 containers: []
	W1209 04:43:26.325810 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:26.325816 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:26.325878 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:26.362024 1620518 cri.go:89] found id: ""
	I1209 04:43:26.362037 1620518 logs.go:282] 0 containers: []
	W1209 04:43:26.362044 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:26.362049 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:26.362105 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:26.389037 1620518 cri.go:89] found id: ""
	I1209 04:43:26.389051 1620518 logs.go:282] 0 containers: []
	W1209 04:43:26.389058 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:26.389063 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:26.389123 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:26.416773 1620518 cri.go:89] found id: ""
	I1209 04:43:26.416787 1620518 logs.go:282] 0 containers: []
	W1209 04:43:26.416794 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:26.416799 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:26.416854 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:26.442294 1620518 cri.go:89] found id: ""
	I1209 04:43:26.442308 1620518 logs.go:282] 0 containers: []
	W1209 04:43:26.442315 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:26.442323 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:26.442334 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:26.508604 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:26.508623 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:26.523993 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:26.524013 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:26.599795 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:26.590777   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:26.591488   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:26.593176   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:26.593729   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:26.595401   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:43:26.590777   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:26.591488   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:26.593176   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:26.593729   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:26.595401   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:26.599816 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:26.599829 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:26.676981 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:26.677003 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:29.206372 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:29.216486 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:29.216547 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:29.241737 1620518 cri.go:89] found id: ""
	I1209 04:43:29.241752 1620518 logs.go:282] 0 containers: []
	W1209 04:43:29.241759 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:29.241764 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:29.241819 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:29.275909 1620518 cri.go:89] found id: ""
	I1209 04:43:29.275922 1620518 logs.go:282] 0 containers: []
	W1209 04:43:29.275929 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:29.275935 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:29.275993 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:29.300470 1620518 cri.go:89] found id: ""
	I1209 04:43:29.300483 1620518 logs.go:282] 0 containers: []
	W1209 04:43:29.300490 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:29.300495 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:29.300552 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:29.326081 1620518 cri.go:89] found id: ""
	I1209 04:43:29.326094 1620518 logs.go:282] 0 containers: []
	W1209 04:43:29.326101 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:29.326106 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:29.326166 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:29.353323 1620518 cri.go:89] found id: ""
	I1209 04:43:29.353337 1620518 logs.go:282] 0 containers: []
	W1209 04:43:29.353344 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:29.353349 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:29.353414 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:29.378490 1620518 cri.go:89] found id: ""
	I1209 04:43:29.378505 1620518 logs.go:282] 0 containers: []
	W1209 04:43:29.378512 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:29.378517 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:29.378599 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:29.404558 1620518 cri.go:89] found id: ""
	I1209 04:43:29.404571 1620518 logs.go:282] 0 containers: []
	W1209 04:43:29.404578 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:29.404585 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:29.404595 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:29.470257 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:29.470277 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:29.485347 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:29.485368 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:29.550659 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:29.541924   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:29.542686   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:29.544323   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:29.545085   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:29.546770   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:43:29.541924   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:29.542686   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:29.544323   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:29.545085   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:29.546770   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:29.550676 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:29.550687 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:29.628618 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:29.628639 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:32.159988 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:32.170169 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:32.170227 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:32.195475 1620518 cri.go:89] found id: ""
	I1209 04:43:32.195489 1620518 logs.go:282] 0 containers: []
	W1209 04:43:32.195496 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:32.195502 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:32.195558 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:32.221067 1620518 cri.go:89] found id: ""
	I1209 04:43:32.221080 1620518 logs.go:282] 0 containers: []
	W1209 04:43:32.221088 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:32.221093 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:32.221160 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:32.247302 1620518 cri.go:89] found id: ""
	I1209 04:43:32.247315 1620518 logs.go:282] 0 containers: []
	W1209 04:43:32.247322 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:32.247327 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:32.247388 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:32.273214 1620518 cri.go:89] found id: ""
	I1209 04:43:32.273227 1620518 logs.go:282] 0 containers: []
	W1209 04:43:32.273234 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:32.273239 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:32.273296 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:32.301827 1620518 cri.go:89] found id: ""
	I1209 04:43:32.301842 1620518 logs.go:282] 0 containers: []
	W1209 04:43:32.301849 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:32.301855 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:32.301920 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:32.327504 1620518 cri.go:89] found id: ""
	I1209 04:43:32.327518 1620518 logs.go:282] 0 containers: []
	W1209 04:43:32.327526 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:32.327531 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:32.327592 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:32.354211 1620518 cri.go:89] found id: ""
	I1209 04:43:32.354225 1620518 logs.go:282] 0 containers: []
	W1209 04:43:32.354232 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:32.354240 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:32.354251 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:32.424906 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:32.424926 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:32.440380 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:32.440396 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:32.508486 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:32.500209   11908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:32.500881   11908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:32.502632   11908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:32.503285   11908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:32.504430   11908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:43:32.500209   11908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:32.500881   11908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:32.502632   11908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:32.503285   11908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:32.504430   11908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:32.508496 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:32.508506 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:32.577521 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:32.577541 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:35.111262 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:35.121574 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:35.121636 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:35.147108 1620518 cri.go:89] found id: ""
	I1209 04:43:35.147121 1620518 logs.go:282] 0 containers: []
	W1209 04:43:35.147128 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:35.147134 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:35.147193 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:35.172557 1620518 cri.go:89] found id: ""
	I1209 04:43:35.172571 1620518 logs.go:282] 0 containers: []
	W1209 04:43:35.172578 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:35.172583 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:35.172644 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:35.200994 1620518 cri.go:89] found id: ""
	I1209 04:43:35.201007 1620518 logs.go:282] 0 containers: []
	W1209 04:43:35.201020 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:35.201025 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:35.201082 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:35.230443 1620518 cri.go:89] found id: ""
	I1209 04:43:35.230457 1620518 logs.go:282] 0 containers: []
	W1209 04:43:35.230470 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:35.230476 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:35.230536 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:35.255703 1620518 cri.go:89] found id: ""
	I1209 04:43:35.255716 1620518 logs.go:282] 0 containers: []
	W1209 04:43:35.255723 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:35.255728 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:35.255786 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:35.281749 1620518 cri.go:89] found id: ""
	I1209 04:43:35.281762 1620518 logs.go:282] 0 containers: []
	W1209 04:43:35.281780 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:35.281786 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:35.281852 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:35.306677 1620518 cri.go:89] found id: ""
	I1209 04:43:35.306690 1620518 logs.go:282] 0 containers: []
	W1209 04:43:35.306697 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:35.306705 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:35.306715 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:35.375938 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:35.375957 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:35.390955 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:35.390984 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:35.457222 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:35.448756   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:35.449545   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:35.451244   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:35.451795   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:35.453385   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:43:35.448756   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:35.449545   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:35.451244   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:35.451795   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:35.453385   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:35.457240 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:35.457252 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:35.526131 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:35.526150 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:38.057096 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:38.068039 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:38.068101 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:38.097645 1620518 cri.go:89] found id: ""
	I1209 04:43:38.097659 1620518 logs.go:282] 0 containers: []
	W1209 04:43:38.097666 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:38.097672 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:38.097730 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:38.125024 1620518 cri.go:89] found id: ""
	I1209 04:43:38.125038 1620518 logs.go:282] 0 containers: []
	W1209 04:43:38.125045 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:38.125051 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:38.125106 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:38.158551 1620518 cri.go:89] found id: ""
	I1209 04:43:38.158565 1620518 logs.go:282] 0 containers: []
	W1209 04:43:38.158597 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:38.158602 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:38.158667 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:38.185732 1620518 cri.go:89] found id: ""
	I1209 04:43:38.185746 1620518 logs.go:282] 0 containers: []
	W1209 04:43:38.185753 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:38.185758 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:38.185817 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:38.211917 1620518 cri.go:89] found id: ""
	I1209 04:43:38.211931 1620518 logs.go:282] 0 containers: []
	W1209 04:43:38.211938 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:38.211944 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:38.212003 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:38.242391 1620518 cri.go:89] found id: ""
	I1209 04:43:38.242407 1620518 logs.go:282] 0 containers: []
	W1209 04:43:38.242414 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:38.242420 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:38.242495 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:38.268565 1620518 cri.go:89] found id: ""
	I1209 04:43:38.268598 1620518 logs.go:282] 0 containers: []
	W1209 04:43:38.268606 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:38.268616 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:38.268628 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:38.335336 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:38.335355 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:38.350651 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:38.350667 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:38.413931 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:38.405709   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:38.406404   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:38.408105   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:38.408552   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:38.410061   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:43:38.405709   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:38.406404   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:38.408105   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:38.408552   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:38.410061   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:38.413941 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:38.413952 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:38.481874 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:38.481894 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:41.013724 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:41.024462 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:41.024521 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:41.050950 1620518 cri.go:89] found id: ""
	I1209 04:43:41.050965 1620518 logs.go:282] 0 containers: []
	W1209 04:43:41.050973 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:41.050979 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:41.051050 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:41.080781 1620518 cri.go:89] found id: ""
	I1209 04:43:41.080794 1620518 logs.go:282] 0 containers: []
	W1209 04:43:41.080801 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:41.080806 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:41.080864 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:41.107039 1620518 cri.go:89] found id: ""
	I1209 04:43:41.107053 1620518 logs.go:282] 0 containers: []
	W1209 04:43:41.107059 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:41.107064 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:41.107122 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:41.131302 1620518 cri.go:89] found id: ""
	I1209 04:43:41.131316 1620518 logs.go:282] 0 containers: []
	W1209 04:43:41.131323 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:41.131328 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:41.131387 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:41.160541 1620518 cri.go:89] found id: ""
	I1209 04:43:41.160554 1620518 logs.go:282] 0 containers: []
	W1209 04:43:41.160560 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:41.160566 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:41.160623 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:41.189715 1620518 cri.go:89] found id: ""
	I1209 04:43:41.189728 1620518 logs.go:282] 0 containers: []
	W1209 04:43:41.189735 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:41.189741 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:41.189798 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:41.215532 1620518 cri.go:89] found id: ""
	I1209 04:43:41.215545 1620518 logs.go:282] 0 containers: []
	W1209 04:43:41.215552 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:41.215559 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:41.215570 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:41.248230 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:41.248245 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:41.316564 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:41.316589 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:41.332031 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:41.332048 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:41.399707 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:41.390550   12237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:41.391761   12237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:41.393298   12237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:41.393745   12237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:41.395316   12237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:43:41.390550   12237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:41.391761   12237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:41.393298   12237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:41.393745   12237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:41.395316   12237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:41.399720 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:41.399733 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
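Note that the gathering order is not stable across cycles: this one ran container status, then kubelet, dmesg, describe nodes, and CRI-O, while earlier cycles started with dmesg or kubelet. That is consistent with ranging over a Go map, whose iteration order is deliberately randomized; a short demonstration (whether minikube actually keeps the collectors in a map is an assumption):

package main

import "fmt"

// Go randomizes map iteration order on every range, which would explain why
// the "Gathering logs for ..." sequence differs from cycle to cycle above.
// Illustration only, not minikube's actual data structure.
func main() {
	collectors := map[string]bool{
		"kubelet": true, "dmesg": true, "describe nodes": true,
		"CRI-O": true, "container status": true,
	}
	for name := range collectors {
		fmt.Println(name) // order varies between iterations
	}
}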
	I1209 04:43:43.973310 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:43.983577 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:43.983641 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:44.019270 1620518 cri.go:89] found id: ""
	I1209 04:43:44.019285 1620518 logs.go:282] 0 containers: []
	W1209 04:43:44.019292 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:44.019298 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:44.019362 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:44.046326 1620518 cri.go:89] found id: ""
	I1209 04:43:44.046340 1620518 logs.go:282] 0 containers: []
	W1209 04:43:44.046347 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:44.046353 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:44.046416 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:44.073718 1620518 cri.go:89] found id: ""
	I1209 04:43:44.073732 1620518 logs.go:282] 0 containers: []
	W1209 04:43:44.073739 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:44.073745 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:44.073806 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:44.099804 1620518 cri.go:89] found id: ""
	I1209 04:43:44.099818 1620518 logs.go:282] 0 containers: []
	W1209 04:43:44.099825 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:44.099830 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:44.099888 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:44.125332 1620518 cri.go:89] found id: ""
	I1209 04:43:44.125346 1620518 logs.go:282] 0 containers: []
	W1209 04:43:44.125353 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:44.125358 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:44.125418 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:44.153398 1620518 cri.go:89] found id: ""
	I1209 04:43:44.153413 1620518 logs.go:282] 0 containers: []
	W1209 04:43:44.153420 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:44.153438 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:44.153501 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:44.181868 1620518 cri.go:89] found id: ""
	I1209 04:43:44.181882 1620518 logs.go:282] 0 containers: []
	W1209 04:43:44.181889 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:44.181909 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:44.181919 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:44.197827 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:44.197843 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:44.262818 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:44.254312   12332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:44.255050   12332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:44.256717   12332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:44.257244   12332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:44.258990   12332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:43:44.262829 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:44.262840 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:44.331403 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:44.331423 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:44.363934 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:44.363951 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:46.935826 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:46.946383 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:46.946442 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:46.972025 1620518 cri.go:89] found id: ""
	I1209 04:43:46.972039 1620518 logs.go:282] 0 containers: []
	W1209 04:43:46.972046 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:46.972052 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:46.972114 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:47.005389 1620518 cri.go:89] found id: ""
	I1209 04:43:47.005411 1620518 logs.go:282] 0 containers: []
	W1209 04:43:47.005428 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:47.005434 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:47.005503 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:47.034137 1620518 cri.go:89] found id: ""
	I1209 04:43:47.034151 1620518 logs.go:282] 0 containers: []
	W1209 04:43:47.034159 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:47.034164 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:47.034224 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:47.060061 1620518 cri.go:89] found id: ""
	I1209 04:43:47.060074 1620518 logs.go:282] 0 containers: []
	W1209 04:43:47.060081 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:47.060086 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:47.060155 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:47.087325 1620518 cri.go:89] found id: ""
	I1209 04:43:47.087339 1620518 logs.go:282] 0 containers: []
	W1209 04:43:47.087346 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:47.087351 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:47.087412 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:47.113243 1620518 cri.go:89] found id: ""
	I1209 04:43:47.113257 1620518 logs.go:282] 0 containers: []
	W1209 04:43:47.113265 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:47.113271 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:47.113333 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:47.139697 1620518 cri.go:89] found id: ""
	I1209 04:43:47.139710 1620518 logs.go:282] 0 containers: []
	W1209 04:43:47.139718 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:47.139725 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:47.139735 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:47.208645 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:47.208665 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:47.224099 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:47.224118 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:47.291121 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:47.282532   12440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:47.283245   12440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:47.284856   12440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:47.285413   12440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:47.287073   12440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:43:47.291131 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:47.291143 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:47.360007 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:47.360028 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:49.894321 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:49.904751 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:49.904813 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:49.933138 1620518 cri.go:89] found id: ""
	I1209 04:43:49.933152 1620518 logs.go:282] 0 containers: []
	W1209 04:43:49.933160 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:49.933165 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:49.933223 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:49.959143 1620518 cri.go:89] found id: ""
	I1209 04:43:49.959156 1620518 logs.go:282] 0 containers: []
	W1209 04:43:49.959163 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:49.959174 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:49.959231 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:49.984103 1620518 cri.go:89] found id: ""
	I1209 04:43:49.984118 1620518 logs.go:282] 0 containers: []
	W1209 04:43:49.984125 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:49.984130 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:49.984188 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:50.019299 1620518 cri.go:89] found id: ""
	I1209 04:43:50.019314 1620518 logs.go:282] 0 containers: []
	W1209 04:43:50.019322 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:50.019328 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:50.019394 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:50.050759 1620518 cri.go:89] found id: ""
	I1209 04:43:50.050773 1620518 logs.go:282] 0 containers: []
	W1209 04:43:50.050780 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:50.050785 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:50.050852 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:50.077915 1620518 cri.go:89] found id: ""
	I1209 04:43:50.077929 1620518 logs.go:282] 0 containers: []
	W1209 04:43:50.077937 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:50.077942 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:50.078003 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:50.105340 1620518 cri.go:89] found id: ""
	I1209 04:43:50.105354 1620518 logs.go:282] 0 containers: []
	W1209 04:43:50.105361 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:50.105369 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:50.105382 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:50.176940 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:50.168731   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:50.169401   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:50.171044   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:50.171455   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:50.173045   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:43:50.176950 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:50.176961 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:50.250014 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:50.250035 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:50.279274 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:50.279290 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:50.344336 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:50.344354 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:52.861162 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:52.873255 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:52.873331 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:52.901728 1620518 cri.go:89] found id: ""
	I1209 04:43:52.901743 1620518 logs.go:282] 0 containers: []
	W1209 04:43:52.901750 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:52.901756 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:52.901847 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:52.927167 1620518 cri.go:89] found id: ""
	I1209 04:43:52.927180 1620518 logs.go:282] 0 containers: []
	W1209 04:43:52.927187 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:52.927192 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:52.927252 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:52.953243 1620518 cri.go:89] found id: ""
	I1209 04:43:52.953256 1620518 logs.go:282] 0 containers: []
	W1209 04:43:52.953263 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:52.953268 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:52.953326 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:52.981127 1620518 cri.go:89] found id: ""
	I1209 04:43:52.981140 1620518 logs.go:282] 0 containers: []
	W1209 04:43:52.981147 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:52.981152 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:52.981210 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:53.014584 1620518 cri.go:89] found id: ""
	I1209 04:43:53.014600 1620518 logs.go:282] 0 containers: []
	W1209 04:43:53.014608 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:53.014613 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:53.014681 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:53.041932 1620518 cri.go:89] found id: ""
	I1209 04:43:53.041946 1620518 logs.go:282] 0 containers: []
	W1209 04:43:53.041954 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:53.041960 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:53.042027 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:53.068705 1620518 cri.go:89] found id: ""
	I1209 04:43:53.068719 1620518 logs.go:282] 0 containers: []
	W1209 04:43:53.068725 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:53.068733 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:53.068749 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:53.097490 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:53.097506 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:53.162858 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:53.162879 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:53.177170 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:53.177185 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:53.240297 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:53.232197   12657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:53.232986   12657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:53.234644   12657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:53.234971   12657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:53.236396   12657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:43:53.240307 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:53.240320 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:55.810542 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:55.820923 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:55.820985 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:55.860409 1620518 cri.go:89] found id: ""
	I1209 04:43:55.860422 1620518 logs.go:282] 0 containers: []
	W1209 04:43:55.860429 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:55.860434 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:55.860491 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:55.895639 1620518 cri.go:89] found id: ""
	I1209 04:43:55.895653 1620518 logs.go:282] 0 containers: []
	W1209 04:43:55.895660 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:55.895665 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:55.895729 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:55.922274 1620518 cri.go:89] found id: ""
	I1209 04:43:55.922289 1620518 logs.go:282] 0 containers: []
	W1209 04:43:55.922297 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:55.922302 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:55.922366 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:55.948415 1620518 cri.go:89] found id: ""
	I1209 04:43:55.948437 1620518 logs.go:282] 0 containers: []
	W1209 04:43:55.948444 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:55.948448 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:55.948509 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:55.977442 1620518 cri.go:89] found id: ""
	I1209 04:43:55.977456 1620518 logs.go:282] 0 containers: []
	W1209 04:43:55.977463 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:55.977468 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:55.977525 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:56.006812 1620518 cri.go:89] found id: ""
	I1209 04:43:56.006827 1620518 logs.go:282] 0 containers: []
	W1209 04:43:56.006835 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:56.006841 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:56.006920 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:56.035113 1620518 cri.go:89] found id: ""
	I1209 04:43:56.035128 1620518 logs.go:282] 0 containers: []
	W1209 04:43:56.035135 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:56.035143 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:56.035161 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:56.108405 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:56.099799   12746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:56.100584   12746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:56.102265   12746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:56.102913   12746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:56.104653   12746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:43:56.108424 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:56.108435 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:56.178263 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:56.178284 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:56.211498 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:56.211513 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:56.278845 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:56.278867 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:58.794283 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:58.804745 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:58.804805 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:58.836463 1620518 cri.go:89] found id: ""
	I1209 04:43:58.836482 1620518 logs.go:282] 0 containers: []
	W1209 04:43:58.836489 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:58.836494 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:58.836551 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:58.871008 1620518 cri.go:89] found id: ""
	I1209 04:43:58.871021 1620518 logs.go:282] 0 containers: []
	W1209 04:43:58.871028 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:58.871033 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:58.871096 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:58.904275 1620518 cri.go:89] found id: ""
	I1209 04:43:58.904289 1620518 logs.go:282] 0 containers: []
	W1209 04:43:58.904296 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:58.904301 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:58.904363 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:58.934333 1620518 cri.go:89] found id: ""
	I1209 04:43:58.934346 1620518 logs.go:282] 0 containers: []
	W1209 04:43:58.934353 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:58.934361 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:58.934418 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:58.961476 1620518 cri.go:89] found id: ""
	I1209 04:43:58.961490 1620518 logs.go:282] 0 containers: []
	W1209 04:43:58.961497 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:58.961503 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:58.961562 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:58.987249 1620518 cri.go:89] found id: ""
	I1209 04:43:58.987263 1620518 logs.go:282] 0 containers: []
	W1209 04:43:58.987270 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:58.987276 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:58.987335 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:59.015314 1620518 cri.go:89] found id: ""
	I1209 04:43:59.015328 1620518 logs.go:282] 0 containers: []
	W1209 04:43:59.015335 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:59.015342 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:59.015353 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:59.079415 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:59.070310   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:59.071244   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:59.073057   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:59.073701   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:59.075400   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:43:59.079425 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:59.079436 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:59.150742 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:59.150761 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:59.180649 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:59.180665 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:59.248002 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:59.248020 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:01.763804 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:01.774240 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:01.774302 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:01.802786 1620518 cri.go:89] found id: ""
	I1209 04:44:01.802800 1620518 logs.go:282] 0 containers: []
	W1209 04:44:01.802808 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:01.802813 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:01.802870 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:01.842779 1620518 cri.go:89] found id: ""
	I1209 04:44:01.842794 1620518 logs.go:282] 0 containers: []
	W1209 04:44:01.842801 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:01.842806 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:01.842867 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:01.874062 1620518 cri.go:89] found id: ""
	I1209 04:44:01.874081 1620518 logs.go:282] 0 containers: []
	W1209 04:44:01.874088 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:01.874093 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:01.874157 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:01.903692 1620518 cri.go:89] found id: ""
	I1209 04:44:01.903706 1620518 logs.go:282] 0 containers: []
	W1209 04:44:01.903713 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:01.903718 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:01.903777 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:01.933430 1620518 cri.go:89] found id: ""
	I1209 04:44:01.933444 1620518 logs.go:282] 0 containers: []
	W1209 04:44:01.933451 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:01.933456 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:01.933515 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:01.961286 1620518 cri.go:89] found id: ""
	I1209 04:44:01.961300 1620518 logs.go:282] 0 containers: []
	W1209 04:44:01.961307 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:01.961313 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:01.961373 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:01.990521 1620518 cri.go:89] found id: ""
	I1209 04:44:01.990535 1620518 logs.go:282] 0 containers: []
	W1209 04:44:01.990542 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:01.990550 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:01.990561 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:02.008959 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:02.008977 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:02.076349 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:02.067978   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:02.068680   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:02.070314   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:02.070881   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:02.072482   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:44:02.076359 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:02.076370 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:02.144940 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:02.144960 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:02.175776 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:02.175793 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:04.751592 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:04.762232 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:04.762298 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:04.788096 1620518 cri.go:89] found id: ""
	I1209 04:44:04.788110 1620518 logs.go:282] 0 containers: []
	W1209 04:44:04.788117 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:04.788122 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:04.788184 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:04.829955 1620518 cri.go:89] found id: ""
	I1209 04:44:04.829969 1620518 logs.go:282] 0 containers: []
	W1209 04:44:04.829975 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:04.829981 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:04.830037 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:04.869304 1620518 cri.go:89] found id: ""
	I1209 04:44:04.869318 1620518 logs.go:282] 0 containers: []
	W1209 04:44:04.869325 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:04.869330 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:04.869389 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:04.900033 1620518 cri.go:89] found id: ""
	I1209 04:44:04.900048 1620518 logs.go:282] 0 containers: []
	W1209 04:44:04.900054 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:04.900060 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:04.900118 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:04.926358 1620518 cri.go:89] found id: ""
	I1209 04:44:04.926373 1620518 logs.go:282] 0 containers: []
	W1209 04:44:04.926381 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:04.926386 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:04.926446 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:04.952219 1620518 cri.go:89] found id: ""
	I1209 04:44:04.952233 1620518 logs.go:282] 0 containers: []
	W1209 04:44:04.952240 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:04.952245 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:04.952318 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:04.981606 1620518 cri.go:89] found id: ""
	I1209 04:44:04.981633 1620518 logs.go:282] 0 containers: []
	W1209 04:44:04.981640 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:04.981648 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:04.981659 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:05.054363 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:05.045151   13065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:05.046053   13065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:05.047917   13065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:05.048288   13065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:05.049848   13065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:44:05.054374 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:05.054384 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:05.123486 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:05.123508 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:05.153591 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:05.153609 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:05.220156 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:05.220176 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:07.735728 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:07.746784 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:07.746849 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:07.773633 1620518 cri.go:89] found id: ""
	I1209 04:44:07.773646 1620518 logs.go:282] 0 containers: []
	W1209 04:44:07.773653 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:07.773658 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:07.773714 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:07.799209 1620518 cri.go:89] found id: ""
	I1209 04:44:07.799222 1620518 logs.go:282] 0 containers: []
	W1209 04:44:07.799230 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:07.799235 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:07.799289 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:07.833034 1620518 cri.go:89] found id: ""
	I1209 04:44:07.833047 1620518 logs.go:282] 0 containers: []
	W1209 04:44:07.833055 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:07.833060 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:07.833117 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:07.861960 1620518 cri.go:89] found id: ""
	I1209 04:44:07.861979 1620518 logs.go:282] 0 containers: []
	W1209 04:44:07.861986 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:07.861991 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:07.862048 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:07.891370 1620518 cri.go:89] found id: ""
	I1209 04:44:07.891384 1620518 logs.go:282] 0 containers: []
	W1209 04:44:07.891392 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:07.891398 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:07.891499 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:07.925093 1620518 cri.go:89] found id: ""
	I1209 04:44:07.925106 1620518 logs.go:282] 0 containers: []
	W1209 04:44:07.925113 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:07.925119 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:07.925179 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:07.953814 1620518 cri.go:89] found id: ""
	I1209 04:44:07.953828 1620518 logs.go:282] 0 containers: []
	W1209 04:44:07.953845 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:07.953853 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:07.953863 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:08.019480 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:08.019500 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:08.035405 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:08.035420 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:08.103942 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:08.095426   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:08.096263   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:08.097939   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:08.098274   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:08.099807   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:44:08.095426   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:08.096263   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:08.097939   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:08.098274   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:08.099807   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:44:08.103951 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:08.103964 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:08.173425 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:08.173447 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
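The "connection refused" on localhost:8441 above means nothing is listening on the apiserver port at all (no kube-apiserver container is running), so every `kubectl describe nodes` attempt fails before a request is even sent. A hedged sketch of just that reachability check, using only the Go standard library (the address is taken verbatim from the log):

	// probe_apiserver.go: dials the apiserver port the way kubectl's
	// transport would; in the state logged above this prints the same
	// "connect: connection refused" error.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}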
	I1209 04:44:10.707757 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:10.717859 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:10.717922 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:10.743691 1620518 cri.go:89] found id: ""
	I1209 04:44:10.743705 1620518 logs.go:282] 0 containers: []
	W1209 04:44:10.743712 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:10.743717 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:10.743775 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:10.769622 1620518 cri.go:89] found id: ""
	I1209 04:44:10.769636 1620518 logs.go:282] 0 containers: []
	W1209 04:44:10.769643 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:10.769648 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:10.769707 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:10.802785 1620518 cri.go:89] found id: ""
	I1209 04:44:10.802798 1620518 logs.go:282] 0 containers: []
	W1209 04:44:10.802806 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:10.802811 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:10.802870 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:10.833564 1620518 cri.go:89] found id: ""
	I1209 04:44:10.833579 1620518 logs.go:282] 0 containers: []
	W1209 04:44:10.833587 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:10.833592 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:10.833655 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:10.876749 1620518 cri.go:89] found id: ""
	I1209 04:44:10.876763 1620518 logs.go:282] 0 containers: []
	W1209 04:44:10.876770 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:10.876775 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:10.876832 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:10.907080 1620518 cri.go:89] found id: ""
	I1209 04:44:10.907093 1620518 logs.go:282] 0 containers: []
	W1209 04:44:10.907101 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:10.907106 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:10.907164 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:10.932888 1620518 cri.go:89] found id: ""
	I1209 04:44:10.932903 1620518 logs.go:282] 0 containers: []
	W1209 04:44:10.932910 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:10.932918 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:10.932928 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:10.998090 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:10.998113 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:11.016501 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:11.016518 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:11.083628 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:11.075111   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:11.075522   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:11.077185   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:11.077924   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:11.079551   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:44:11.075111   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:11.075522   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:11.077185   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:11.077924   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:11.079551   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:44:11.083645 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:11.083658 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:11.151855 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:11.151878 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:13.684470 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:13.694706 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:13.694766 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:13.720935 1620518 cri.go:89] found id: ""
	I1209 04:44:13.720948 1620518 logs.go:282] 0 containers: []
	W1209 04:44:13.720955 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:13.720960 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:13.721016 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:13.750286 1620518 cri.go:89] found id: ""
	I1209 04:44:13.750299 1620518 logs.go:282] 0 containers: []
	W1209 04:44:13.750306 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:13.750314 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:13.750372 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:13.774808 1620518 cri.go:89] found id: ""
	I1209 04:44:13.774822 1620518 logs.go:282] 0 containers: []
	W1209 04:44:13.774831 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:13.774836 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:13.774909 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:13.800153 1620518 cri.go:89] found id: ""
	I1209 04:44:13.800167 1620518 logs.go:282] 0 containers: []
	W1209 04:44:13.800174 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:13.800180 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:13.800237 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:13.833377 1620518 cri.go:89] found id: ""
	I1209 04:44:13.833402 1620518 logs.go:282] 0 containers: []
	W1209 04:44:13.833409 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:13.833415 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:13.833487 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:13.863754 1620518 cri.go:89] found id: ""
	I1209 04:44:13.863767 1620518 logs.go:282] 0 containers: []
	W1209 04:44:13.863774 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:13.863780 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:13.863836 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:13.899969 1620518 cri.go:89] found id: ""
	I1209 04:44:13.899983 1620518 logs.go:282] 0 containers: []
	W1209 04:44:13.899990 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:13.899997 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:13.900008 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:13.964963 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:13.964983 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:13.980119 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:13.980136 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:14.051622 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:14.042651   13381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:14.043666   13381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:14.045214   13381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:14.045756   13381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:14.047568   13381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:44:14.042651   13381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:14.043666   13381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:14.045214   13381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:14.045756   13381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:14.047568   13381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:44:14.051632 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:14.051644 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:14.120152 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:14.120171 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:16.651342 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:16.661695 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:16.661769 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:16.688695 1620518 cri.go:89] found id: ""
	I1209 04:44:16.688709 1620518 logs.go:282] 0 containers: []
	W1209 04:44:16.688717 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:16.688724 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:16.688783 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:16.714481 1620518 cri.go:89] found id: ""
	I1209 04:44:16.714495 1620518 logs.go:282] 0 containers: []
	W1209 04:44:16.714502 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:16.714507 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:16.714563 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:16.740966 1620518 cri.go:89] found id: ""
	I1209 04:44:16.740980 1620518 logs.go:282] 0 containers: []
	W1209 04:44:16.740987 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:16.740992 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:16.741048 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:16.772332 1620518 cri.go:89] found id: ""
	I1209 04:44:16.772346 1620518 logs.go:282] 0 containers: []
	W1209 04:44:16.772353 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:16.772358 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:16.772429 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:16.800889 1620518 cri.go:89] found id: ""
	I1209 04:44:16.800903 1620518 logs.go:282] 0 containers: []
	W1209 04:44:16.800910 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:16.800916 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:16.800979 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:16.836688 1620518 cri.go:89] found id: ""
	I1209 04:44:16.836702 1620518 logs.go:282] 0 containers: []
	W1209 04:44:16.836709 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:16.836715 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:16.836779 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:16.877225 1620518 cri.go:89] found id: ""
	I1209 04:44:16.877238 1620518 logs.go:282] 0 containers: []
	W1209 04:44:16.877245 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:16.877253 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:16.877263 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:16.947272 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:16.947292 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:16.964059 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:16.964075 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:17.033163 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:17.024900   13483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:17.025556   13483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:17.027130   13483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:17.027646   13483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:17.029289   13483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:44:17.024900   13483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:17.025556   13483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:17.027130   13483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:17.027646   13483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:17.029289   13483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:44:17.033172 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:17.033183 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:17.101285 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:17.101306 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:19.635736 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:19.645923 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:19.645987 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:19.671893 1620518 cri.go:89] found id: ""
	I1209 04:44:19.671907 1620518 logs.go:282] 0 containers: []
	W1209 04:44:19.671913 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:19.671918 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:19.671975 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:19.697145 1620518 cri.go:89] found id: ""
	I1209 04:44:19.697159 1620518 logs.go:282] 0 containers: []
	W1209 04:44:19.697166 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:19.697171 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:19.697228 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:19.726050 1620518 cri.go:89] found id: ""
	I1209 04:44:19.726064 1620518 logs.go:282] 0 containers: []
	W1209 04:44:19.726072 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:19.726077 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:19.726135 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:19.753277 1620518 cri.go:89] found id: ""
	I1209 04:44:19.753290 1620518 logs.go:282] 0 containers: []
	W1209 04:44:19.753297 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:19.753302 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:19.753364 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:19.778375 1620518 cri.go:89] found id: ""
	I1209 04:44:19.778388 1620518 logs.go:282] 0 containers: []
	W1209 04:44:19.778395 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:19.778410 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:19.778483 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:19.803668 1620518 cri.go:89] found id: ""
	I1209 04:44:19.803682 1620518 logs.go:282] 0 containers: []
	W1209 04:44:19.803690 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:19.803695 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:19.803757 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:19.841128 1620518 cri.go:89] found id: ""
	I1209 04:44:19.841142 1620518 logs.go:282] 0 containers: []
	W1209 04:44:19.841149 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:19.841157 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:19.841167 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:19.917953 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:19.917972 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:19.933437 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:19.933455 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:20.001189 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:19.992491   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:19.992867   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:19.994437   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:19.994797   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:19.996244   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:44:19.992491   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:19.992867   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:19.994437   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:19.994797   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:19.996244   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:44:20.001200 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:20.001214 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:20.072973 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:20.072992 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:22.607218 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:22.618312 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:22.618373 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:22.644573 1620518 cri.go:89] found id: ""
	I1209 04:44:22.644587 1620518 logs.go:282] 0 containers: []
	W1209 04:44:22.644594 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:22.644600 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:22.644669 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:22.671737 1620518 cri.go:89] found id: ""
	I1209 04:44:22.671751 1620518 logs.go:282] 0 containers: []
	W1209 04:44:22.671758 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:22.671763 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:22.671819 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:22.697372 1620518 cri.go:89] found id: ""
	I1209 04:44:22.697386 1620518 logs.go:282] 0 containers: []
	W1209 04:44:22.697393 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:22.697398 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:22.697456 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:22.724412 1620518 cri.go:89] found id: ""
	I1209 04:44:22.724428 1620518 logs.go:282] 0 containers: []
	W1209 04:44:22.724436 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:22.724448 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:22.724512 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:22.757520 1620518 cri.go:89] found id: ""
	I1209 04:44:22.757533 1620518 logs.go:282] 0 containers: []
	W1209 04:44:22.757551 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:22.757556 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:22.757623 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:22.787926 1620518 cri.go:89] found id: ""
	I1209 04:44:22.787939 1620518 logs.go:282] 0 containers: []
	W1209 04:44:22.787946 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:22.787951 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:22.788014 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:22.813253 1620518 cri.go:89] found id: ""
	I1209 04:44:22.813267 1620518 logs.go:282] 0 containers: []
	W1209 04:44:22.813284 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:22.813292 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:22.813303 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:22.889757 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:22.889776 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:22.905834 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:22.905850 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:22.976939 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:22.967912   13692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:22.968798   13692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:22.970382   13692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:22.971021   13692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:22.972529   13692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:44:22.967912   13692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:22.968798   13692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:22.970382   13692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:22.971021   13692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:22.972529   13692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:44:22.976949 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:22.976960 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:23.044862 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:23.044881 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:25.578382 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:25.589220 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:25.589287 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:25.617854 1620518 cri.go:89] found id: ""
	I1209 04:44:25.617868 1620518 logs.go:282] 0 containers: []
	W1209 04:44:25.617875 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:25.617880 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:25.617937 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:25.642864 1620518 cri.go:89] found id: ""
	I1209 04:44:25.642883 1620518 logs.go:282] 0 containers: []
	W1209 04:44:25.642890 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:25.642895 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:25.642952 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:25.670199 1620518 cri.go:89] found id: ""
	I1209 04:44:25.670213 1620518 logs.go:282] 0 containers: []
	W1209 04:44:25.670220 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:25.670225 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:25.670283 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:25.697688 1620518 cri.go:89] found id: ""
	I1209 04:44:25.697702 1620518 logs.go:282] 0 containers: []
	W1209 04:44:25.697720 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:25.697725 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:25.697827 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:25.723203 1620518 cri.go:89] found id: ""
	I1209 04:44:25.723218 1620518 logs.go:282] 0 containers: []
	W1209 04:44:25.723225 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:25.723230 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:25.723287 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:25.752776 1620518 cri.go:89] found id: ""
	I1209 04:44:25.752790 1620518 logs.go:282] 0 containers: []
	W1209 04:44:25.752798 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:25.752803 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:25.752866 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:25.778450 1620518 cri.go:89] found id: ""
	I1209 04:44:25.778474 1620518 logs.go:282] 0 containers: []
	W1209 04:44:25.778483 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:25.778490 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:25.778501 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:25.846732 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:25.846750 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:25.863685 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:25.863701 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:25.940317 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:25.931569   13798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:25.932325   13798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:25.934011   13798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:25.934352   13798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:25.936136   13798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:44:25.931569   13798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:25.932325   13798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:25.934011   13798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:25.934352   13798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:25.936136   13798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:44:25.940328 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:25.940339 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:26.013087 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:26.013109 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:28.543111 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:28.553653 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:28.553717 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:28.579452 1620518 cri.go:89] found id: ""
	I1209 04:44:28.579465 1620518 logs.go:282] 0 containers: []
	W1209 04:44:28.579472 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:28.579478 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:28.579542 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:28.605894 1620518 cri.go:89] found id: ""
	I1209 04:44:28.605909 1620518 logs.go:282] 0 containers: []
	W1209 04:44:28.605916 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:28.605921 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:28.605983 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:28.633021 1620518 cri.go:89] found id: ""
	I1209 04:44:28.633044 1620518 logs.go:282] 0 containers: []
	W1209 04:44:28.633051 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:28.633057 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:28.633129 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:28.657926 1620518 cri.go:89] found id: ""
	I1209 04:44:28.657946 1620518 logs.go:282] 0 containers: []
	W1209 04:44:28.657953 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:28.657959 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:28.658027 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:28.685339 1620518 cri.go:89] found id: ""
	I1209 04:44:28.685353 1620518 logs.go:282] 0 containers: []
	W1209 04:44:28.685360 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:28.685366 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:28.685433 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:28.718472 1620518 cri.go:89] found id: ""
	I1209 04:44:28.718485 1620518 logs.go:282] 0 containers: []
	W1209 04:44:28.718492 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:28.718498 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:28.718554 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:28.748502 1620518 cri.go:89] found id: ""
	I1209 04:44:28.748516 1620518 logs.go:282] 0 containers: []
	W1209 04:44:28.748523 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:28.748531 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:28.748543 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:28.763578 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:28.763594 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:28.830210 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:28.817342   13896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:28.818137   13896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:28.819682   13896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:28.819980   13896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:28.823920   13896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:44:28.817342   13896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:28.818137   13896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:28.819682   13896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:28.819980   13896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:28.823920   13896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:44:28.830220 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:28.830231 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:28.905378 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:28.905401 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:28.934445 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:28.934466 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:31.501091 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:31.511589 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:31.511662 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:31.537954 1620518 cri.go:89] found id: ""
	I1209 04:44:31.537967 1620518 logs.go:282] 0 containers: []
	W1209 04:44:31.537974 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:31.537979 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:31.538035 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:31.563399 1620518 cri.go:89] found id: ""
	I1209 04:44:31.563412 1620518 logs.go:282] 0 containers: []
	W1209 04:44:31.563419 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:31.563424 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:31.563481 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:31.590727 1620518 cri.go:89] found id: ""
	I1209 04:44:31.590741 1620518 logs.go:282] 0 containers: []
	W1209 04:44:31.590748 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:31.590753 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:31.590817 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:31.619991 1620518 cri.go:89] found id: ""
	I1209 04:44:31.620004 1620518 logs.go:282] 0 containers: []
	W1209 04:44:31.620012 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:31.620017 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:31.620073 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:31.646682 1620518 cri.go:89] found id: ""
	I1209 04:44:31.646695 1620518 logs.go:282] 0 containers: []
	W1209 04:44:31.646703 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:31.646709 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:31.646783 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:31.676240 1620518 cri.go:89] found id: ""
	I1209 04:44:31.676254 1620518 logs.go:282] 0 containers: []
	W1209 04:44:31.676261 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:31.676266 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:31.676324 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:31.701874 1620518 cri.go:89] found id: ""
	I1209 04:44:31.701898 1620518 logs.go:282] 0 containers: []
	W1209 04:44:31.701906 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:31.701914 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:31.701924 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:31.729913 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:31.729929 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:31.795202 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:31.795222 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:31.810455 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:31.810471 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:31.910056 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:31.901648   14015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:31.902306   14015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:31.903933   14015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:31.904418   14015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:31.906134   14015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:44:31.901648   14015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:31.902306   14015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:31.903933   14015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:31.904418   14015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:31.906134   14015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:44:31.910067 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:31.910079 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
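
The block above is one full iteration of minikube's apiserver wait loop: pgrep finds no kube-apiserver process, crictl finds no control-plane containers for any of the expected names, and the describe-nodes probe fails because nothing is listening on localhost:8441, so the same diagnostics are gathered again a few seconds later. A minimal sketch of the same probe sequence, runnable by hand on the node (all commands are taken verbatim from the log lines above; shell access to the minikube node is assumed):

  # does an apiserver process exist at all?
  sudo pgrep -xnf 'kube-apiserver.*minikube.*'
  # any apiserver containers, running or exited?
  sudo crictl ps -a --quiet --name=kube-apiserver
  # recent kubelet and CRI-O logs usually explain why the control-plane pods never started
  sudo journalctl -u kubelet -n 400
  sudo journalctl -u crio -n 400
  # the probe that keeps failing while :8441 is down
  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
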
	I1209 04:44:34.486956 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:34.497309 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:34.497372 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:34.523236 1620518 cri.go:89] found id: ""
	I1209 04:44:34.523250 1620518 logs.go:282] 0 containers: []
	W1209 04:44:34.523257 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:34.523262 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:34.523320 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:34.549906 1620518 cri.go:89] found id: ""
	I1209 04:44:34.549920 1620518 logs.go:282] 0 containers: []
	W1209 04:44:34.549935 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:34.549940 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:34.549997 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:34.577694 1620518 cri.go:89] found id: ""
	I1209 04:44:34.577708 1620518 logs.go:282] 0 containers: []
	W1209 04:44:34.577716 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:34.577721 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:34.577781 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:34.604297 1620518 cri.go:89] found id: ""
	I1209 04:44:34.604311 1620518 logs.go:282] 0 containers: []
	W1209 04:44:34.604319 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:34.604325 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:34.604388 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:34.629233 1620518 cri.go:89] found id: ""
	I1209 04:44:34.629249 1620518 logs.go:282] 0 containers: []
	W1209 04:44:34.629257 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:34.629262 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:34.629330 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:34.659380 1620518 cri.go:89] found id: ""
	I1209 04:44:34.659394 1620518 logs.go:282] 0 containers: []
	W1209 04:44:34.659401 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:34.659407 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:34.659466 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:34.688342 1620518 cri.go:89] found id: ""
	I1209 04:44:34.688356 1620518 logs.go:282] 0 containers: []
	W1209 04:44:34.688363 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:34.688370 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:34.688383 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:34.703538 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:34.703555 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:34.766893 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:34.758520   14106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:34.759198   14106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:34.760746   14106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:34.761300   14106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:34.763031   14106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:44:34.758520   14106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:34.759198   14106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:34.760746   14106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:34.761300   14106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:34.763031   14106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:44:34.766907 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:34.766925 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:34.835016 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:34.835035 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:34.867468 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:34.867484 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:37.441777 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:37.452150 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:37.452220 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:37.477442 1620518 cri.go:89] found id: ""
	I1209 04:44:37.477456 1620518 logs.go:282] 0 containers: []
	W1209 04:44:37.477463 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:37.477468 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:37.477525 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:37.503669 1620518 cri.go:89] found id: ""
	I1209 04:44:37.503683 1620518 logs.go:282] 0 containers: []
	W1209 04:44:37.503690 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:37.503696 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:37.503756 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:37.529304 1620518 cri.go:89] found id: ""
	I1209 04:44:37.529318 1620518 logs.go:282] 0 containers: []
	W1209 04:44:37.529326 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:37.529331 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:37.529388 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:37.555509 1620518 cri.go:89] found id: ""
	I1209 04:44:37.555523 1620518 logs.go:282] 0 containers: []
	W1209 04:44:37.555539 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:37.555545 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:37.555603 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:37.581297 1620518 cri.go:89] found id: ""
	I1209 04:44:37.581310 1620518 logs.go:282] 0 containers: []
	W1209 04:44:37.581328 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:37.581334 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:37.581403 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:37.607757 1620518 cri.go:89] found id: ""
	I1209 04:44:37.607774 1620518 logs.go:282] 0 containers: []
	W1209 04:44:37.607781 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:37.607787 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:37.607863 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:37.634135 1620518 cri.go:89] found id: ""
	I1209 04:44:37.634159 1620518 logs.go:282] 0 containers: []
	W1209 04:44:37.634167 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:37.634174 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:37.634187 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:37.698412 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:37.690495   14207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:37.691106   14207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:37.692656   14207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:37.693121   14207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:37.694648   14207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:44:37.690495   14207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:37.691106   14207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:37.692656   14207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:37.693121   14207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:37.694648   14207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:44:37.698423 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:37.698434 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:37.765691 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:37.765711 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:37.794807 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:37.794822 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:37.865591 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:37.865609 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:40.382843 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:40.393026 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:40.393086 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:40.417900 1620518 cri.go:89] found id: ""
	I1209 04:44:40.417913 1620518 logs.go:282] 0 containers: []
	W1209 04:44:40.417920 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:40.417926 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:40.417984 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:40.447221 1620518 cri.go:89] found id: ""
	I1209 04:44:40.447235 1620518 logs.go:282] 0 containers: []
	W1209 04:44:40.447242 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:40.447247 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:40.447305 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:40.472564 1620518 cri.go:89] found id: ""
	I1209 04:44:40.472578 1620518 logs.go:282] 0 containers: []
	W1209 04:44:40.472585 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:40.472591 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:40.472651 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:40.498097 1620518 cri.go:89] found id: ""
	I1209 04:44:40.498111 1620518 logs.go:282] 0 containers: []
	W1209 04:44:40.498118 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:40.498123 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:40.498182 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:40.523258 1620518 cri.go:89] found id: ""
	I1209 04:44:40.523271 1620518 logs.go:282] 0 containers: []
	W1209 04:44:40.523279 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:40.523287 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:40.523343 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:40.548390 1620518 cri.go:89] found id: ""
	I1209 04:44:40.548404 1620518 logs.go:282] 0 containers: []
	W1209 04:44:40.548411 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:40.548417 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:40.548475 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:40.573171 1620518 cri.go:89] found id: ""
	I1209 04:44:40.573185 1620518 logs.go:282] 0 containers: []
	W1209 04:44:40.573192 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:40.573199 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:40.573211 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:40.587922 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:40.587937 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:40.648925 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:40.640617   14317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:40.641385   14317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:40.643081   14317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:40.643670   14317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:40.645179   14317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:44:40.640617   14317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:40.641385   14317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:40.643081   14317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:40.643670   14317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:40.645179   14317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:44:40.648934 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:40.648945 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:40.721024 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:40.721047 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:40.756647 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:40.756664 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:43.325607 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:43.335615 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:43.335677 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:43.365344 1620518 cri.go:89] found id: ""
	I1209 04:44:43.365360 1620518 logs.go:282] 0 containers: []
	W1209 04:44:43.365367 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:43.365373 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:43.365432 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:43.391751 1620518 cri.go:89] found id: ""
	I1209 04:44:43.391764 1620518 logs.go:282] 0 containers: []
	W1209 04:44:43.391772 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:43.391783 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:43.391843 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:43.417345 1620518 cri.go:89] found id: ""
	I1209 04:44:43.417359 1620518 logs.go:282] 0 containers: []
	W1209 04:44:43.417366 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:43.417372 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:43.417433 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:43.444314 1620518 cri.go:89] found id: ""
	I1209 04:44:43.444328 1620518 logs.go:282] 0 containers: []
	W1209 04:44:43.444335 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:43.444341 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:43.444402 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:43.473635 1620518 cri.go:89] found id: ""
	I1209 04:44:43.473649 1620518 logs.go:282] 0 containers: []
	W1209 04:44:43.473656 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:43.473661 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:43.473721 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:43.499726 1620518 cri.go:89] found id: ""
	I1209 04:44:43.499740 1620518 logs.go:282] 0 containers: []
	W1209 04:44:43.499747 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:43.499752 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:43.499812 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:43.526373 1620518 cri.go:89] found id: ""
	I1209 04:44:43.526388 1620518 logs.go:282] 0 containers: []
	W1209 04:44:43.526396 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:43.526404 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:43.526415 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:43.591625 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:43.591644 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:43.606802 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:43.606818 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:43.671535 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:43.662523   14423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:43.663221   14423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:43.664909   14423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:43.665492   14423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:43.667229   14423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:44:43.662523   14423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:43.663221   14423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:43.664909   14423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:43.665492   14423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:43.667229   14423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:44:43.671545 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:43.671556 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:43.742830 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:43.742849 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:46.272131 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:46.282533 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:46.282611 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:46.307629 1620518 cri.go:89] found id: ""
	I1209 04:44:46.307644 1620518 logs.go:282] 0 containers: []
	W1209 04:44:46.307652 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:46.307657 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:46.307718 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:46.334241 1620518 cri.go:89] found id: ""
	I1209 04:44:46.334255 1620518 logs.go:282] 0 containers: []
	W1209 04:44:46.334262 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:46.334267 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:46.334326 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:46.360606 1620518 cri.go:89] found id: ""
	I1209 04:44:46.360619 1620518 logs.go:282] 0 containers: []
	W1209 04:44:46.360627 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:46.360632 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:46.360693 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:46.391930 1620518 cri.go:89] found id: ""
	I1209 04:44:46.391944 1620518 logs.go:282] 0 containers: []
	W1209 04:44:46.391951 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:46.391956 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:46.392018 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:46.418088 1620518 cri.go:89] found id: ""
	I1209 04:44:46.418102 1620518 logs.go:282] 0 containers: []
	W1209 04:44:46.418109 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:46.418114 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:46.418173 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:46.444114 1620518 cri.go:89] found id: ""
	I1209 04:44:46.444129 1620518 logs.go:282] 0 containers: []
	W1209 04:44:46.444135 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:46.444141 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:46.444202 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:46.469066 1620518 cri.go:89] found id: ""
	I1209 04:44:46.469079 1620518 logs.go:282] 0 containers: []
	W1209 04:44:46.469096 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:46.469105 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:46.469116 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:46.535118 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:46.526762   14524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:46.527187   14524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:46.528934   14524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:46.529451   14524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:46.531143   14524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:44:46.526762   14524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:46.527187   14524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:46.528934   14524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:46.529451   14524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:46.531143   14524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:44:46.535128 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:46.535140 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:46.603490 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:46.603513 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:46.633565 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:46.633582 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:46.707757 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:46.707778 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:49.223668 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:49.233804 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:49.233863 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:49.262060 1620518 cri.go:89] found id: ""
	I1209 04:44:49.262074 1620518 logs.go:282] 0 containers: []
	W1209 04:44:49.262081 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:49.262087 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:49.262146 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:49.288289 1620518 cri.go:89] found id: ""
	I1209 04:44:49.288303 1620518 logs.go:282] 0 containers: []
	W1209 04:44:49.288310 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:49.288315 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:49.288372 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:49.317469 1620518 cri.go:89] found id: ""
	I1209 04:44:49.317482 1620518 logs.go:282] 0 containers: []
	W1209 04:44:49.317489 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:49.317495 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:49.317553 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:49.343598 1620518 cri.go:89] found id: ""
	I1209 04:44:49.343612 1620518 logs.go:282] 0 containers: []
	W1209 04:44:49.343619 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:49.343624 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:49.343682 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:49.369884 1620518 cri.go:89] found id: ""
	I1209 04:44:49.369898 1620518 logs.go:282] 0 containers: []
	W1209 04:44:49.369905 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:49.369910 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:49.369968 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:49.397485 1620518 cri.go:89] found id: ""
	I1209 04:44:49.397499 1620518 logs.go:282] 0 containers: []
	W1209 04:44:49.397506 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:49.397512 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:49.397576 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:49.426780 1620518 cri.go:89] found id: ""
	I1209 04:44:49.426794 1620518 logs.go:282] 0 containers: []
	W1209 04:44:49.426802 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:49.426810 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:49.426820 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:49.455508 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:49.455524 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:49.521613 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:49.521632 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:49.537098 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:49.537115 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:49.604403 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:49.595461   14642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:49.596171   14642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:49.597975   14642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:49.598557   14642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:49.600294   14642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:44:49.595461   14642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:49.596171   14642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:49.597975   14642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:49.598557   14642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:49.600294   14642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:44:49.604415 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:49.604427 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
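
Every iteration fails at the same point: dial tcp [::1]:8441 is refused, i.e. nothing is bound to the apiserver port yet. A quick check of just that symptom, assuming shell access to the node (curl and ss do not appear in the logged commands; they are illustrative substitutes for the kubectl probe):

  # expected to fail with "connection refused" while the apiserver is down
  curl -ksS https://localhost:8441/healthz
  # confirm nothing is listening on 8441
  sudo ss -tlnp | grep 8441 || echo 'nothing listening on 8441'
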
	I1209 04:44:52.175474 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:52.185416 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:52.185490 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:52.210165 1620518 cri.go:89] found id: ""
	I1209 04:44:52.210179 1620518 logs.go:282] 0 containers: []
	W1209 04:44:52.210186 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:52.210191 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:52.210250 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:52.235252 1620518 cri.go:89] found id: ""
	I1209 04:44:52.235265 1620518 logs.go:282] 0 containers: []
	W1209 04:44:52.235272 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:52.235277 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:52.235335 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:52.260814 1620518 cri.go:89] found id: ""
	I1209 04:44:52.260828 1620518 logs.go:282] 0 containers: []
	W1209 04:44:52.260835 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:52.260840 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:52.260899 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:52.287596 1620518 cri.go:89] found id: ""
	I1209 04:44:52.287609 1620518 logs.go:282] 0 containers: []
	W1209 04:44:52.287616 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:52.287621 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:52.287677 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:52.315049 1620518 cri.go:89] found id: ""
	I1209 04:44:52.315062 1620518 logs.go:282] 0 containers: []
	W1209 04:44:52.315069 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:52.315075 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:52.315139 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:52.339741 1620518 cri.go:89] found id: ""
	I1209 04:44:52.339755 1620518 logs.go:282] 0 containers: []
	W1209 04:44:52.339762 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:52.339767 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:52.339825 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:52.369959 1620518 cri.go:89] found id: ""
	I1209 04:44:52.369973 1620518 logs.go:282] 0 containers: []
	W1209 04:44:52.369981 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:52.369988 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:52.369998 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:52.442787 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:52.434156   14730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:52.434984   14730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:52.436742   14730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:52.437458   14730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:52.439036   14730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:44:52.434156   14730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:52.434984   14730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:52.436742   14730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:52.437458   14730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:52.439036   14730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:44:52.442797 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:52.442807 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:52.511615 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:52.511634 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:52.542801 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:52.542817 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:52.608882 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:52.608904 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:55.125120 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:55.135789 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:55.135848 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:55.162401 1620518 cri.go:89] found id: ""
	I1209 04:44:55.162416 1620518 logs.go:282] 0 containers: []
	W1209 04:44:55.162423 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:55.162428 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:55.162487 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:55.190716 1620518 cri.go:89] found id: ""
	I1209 04:44:55.190730 1620518 logs.go:282] 0 containers: []
	W1209 04:44:55.190736 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:55.190742 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:55.190799 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:55.216812 1620518 cri.go:89] found id: ""
	I1209 04:44:55.216825 1620518 logs.go:282] 0 containers: []
	W1209 04:44:55.216832 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:55.216839 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:55.216896 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:55.241064 1620518 cri.go:89] found id: ""
	I1209 04:44:55.241079 1620518 logs.go:282] 0 containers: []
	W1209 04:44:55.241086 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:55.241092 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:55.241148 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:55.270237 1620518 cri.go:89] found id: ""
	I1209 04:44:55.270251 1620518 logs.go:282] 0 containers: []
	W1209 04:44:55.270258 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:55.270263 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:55.270322 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:55.296228 1620518 cri.go:89] found id: ""
	I1209 04:44:55.296242 1620518 logs.go:282] 0 containers: []
	W1209 04:44:55.296249 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:55.296254 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:55.296315 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:55.322153 1620518 cri.go:89] found id: ""
	I1209 04:44:55.322167 1620518 logs.go:282] 0 containers: []
	W1209 04:44:55.322174 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:55.322181 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:55.322192 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:55.390665 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:55.390684 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:55.405506 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:55.405523 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:55.471951 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:55.463255   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:55.463802   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:55.465674   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:55.466180   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:55.467961   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:44:55.463255   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:55.463802   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:55.465674   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:55.466180   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:55.467961   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:44:55.471960 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:55.471972 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:55.542641 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:55.542662 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
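The container-status pass is runtime-agnostic: the backquoted `which crictl || echo crictl` resolves crictl's full path when installed (falling back to the bare name), and if the whole crictl invocation fails it retries with docker. Spelled out:

	crictl_bin=$(which crictl || echo crictl)    # full path if installed, bare name otherwise
	sudo "$crictl_bin" ps -a || sudo docker ps -a    # fall back to docker on any failure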
	I1209 04:44:58.078721 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:58.089961 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:58.090029 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:58.117883 1620518 cri.go:89] found id: ""
	I1209 04:44:58.117896 1620518 logs.go:282] 0 containers: []
	W1209 04:44:58.117902 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:58.117908 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:58.117968 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:58.150212 1620518 cri.go:89] found id: ""
	I1209 04:44:58.150226 1620518 logs.go:282] 0 containers: []
	W1209 04:44:58.150233 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:58.150238 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:58.150296 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:58.177448 1620518 cri.go:89] found id: ""
	I1209 04:44:58.177462 1620518 logs.go:282] 0 containers: []
	W1209 04:44:58.177469 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:58.177474 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:58.177533 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:58.203663 1620518 cri.go:89] found id: ""
	I1209 04:44:58.203676 1620518 logs.go:282] 0 containers: []
	W1209 04:44:58.203683 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:58.203688 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:58.203779 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:58.229153 1620518 cri.go:89] found id: ""
	I1209 04:44:58.229167 1620518 logs.go:282] 0 containers: []
	W1209 04:44:58.229174 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:58.229179 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:58.229237 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:58.253337 1620518 cri.go:89] found id: ""
	I1209 04:44:58.253365 1620518 logs.go:282] 0 containers: []
	W1209 04:44:58.253372 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:58.253377 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:58.253433 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:58.279202 1620518 cri.go:89] found id: ""
	I1209 04:44:58.279215 1620518 logs.go:282] 0 containers: []
	W1209 04:44:58.279222 1620518 logs.go:284] No container was found matching "kindnet"
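The seven listings above are one full scan of the control-plane and CNI components; an empty ID list from each one is what produces the warnings. The same scan as a loop, using the crictl invocation verbatim from the log:

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  ids=$(sudo crictl ps -a --quiet --name="$c")
	  [ -n "$ids" ] || echo "no container matching $c"
	done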
	I1209 04:44:58.279230 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:58.279240 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:58.352607 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:58.352626 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:58.380559 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:58.380575 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:58.450340 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:58.450359 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:58.466733 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:58.466753 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:58.539538 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:58.531537   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:58.532132   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:58.533605   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:58.534107   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:58.535589   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
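The failing step reproduces on the node with the exact binary and kubeconfig paths from the log; the five near-identical memcache.go errors per attempt appear to be kubectl retrying API discovery before giving up:

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig
	# exits 1 with "connection refused" until the apiserver answers on :8441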
	I1209 04:45:01.039807 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:01.051635 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:01.051699 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:01.081094 1620518 cri.go:89] found id: ""
	I1209 04:45:01.081120 1620518 logs.go:282] 0 containers: []
	W1209 04:45:01.081132 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:01.081138 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:01.081216 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:01.110254 1620518 cri.go:89] found id: ""
	I1209 04:45:01.110270 1620518 logs.go:282] 0 containers: []
	W1209 04:45:01.110277 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:01.110282 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:01.110348 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:01.142200 1620518 cri.go:89] found id: ""
	I1209 04:45:01.142217 1620518 logs.go:282] 0 containers: []
	W1209 04:45:01.142224 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:01.142230 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:01.142295 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:01.173624 1620518 cri.go:89] found id: ""
	I1209 04:45:01.173640 1620518 logs.go:282] 0 containers: []
	W1209 04:45:01.173647 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:01.173653 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:01.173714 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:01.200655 1620518 cri.go:89] found id: ""
	I1209 04:45:01.200669 1620518 logs.go:282] 0 containers: []
	W1209 04:45:01.200676 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:01.200681 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:01.200753 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:01.228245 1620518 cri.go:89] found id: ""
	I1209 04:45:01.228260 1620518 logs.go:282] 0 containers: []
	W1209 04:45:01.228268 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:01.228274 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:01.228344 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:01.255910 1620518 cri.go:89] found id: ""
	I1209 04:45:01.255924 1620518 logs.go:282] 0 containers: []
	W1209 04:45:01.255932 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:01.255941 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:01.255955 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:01.272811 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:01.272829 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:01.345905 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:01.336766   15040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:01.337312   15040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:01.339248   15040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:01.339633   15040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:01.341414   15040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:45:01.345916 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:01.345926 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:01.428612 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:01.428634 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:01.462789 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:01.462805 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:04.036441 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:04.048197 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:04.048263 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:04.077328 1620518 cri.go:89] found id: ""
	I1209 04:45:04.077347 1620518 logs.go:282] 0 containers: []
	W1209 04:45:04.077354 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:04.077361 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:04.077424 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:04.105221 1620518 cri.go:89] found id: ""
	I1209 04:45:04.105235 1620518 logs.go:282] 0 containers: []
	W1209 04:45:04.105243 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:04.105249 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:04.105315 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:04.136847 1620518 cri.go:89] found id: ""
	I1209 04:45:04.136860 1620518 logs.go:282] 0 containers: []
	W1209 04:45:04.136868 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:04.136873 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:04.136934 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:04.167906 1620518 cri.go:89] found id: ""
	I1209 04:45:04.167920 1620518 logs.go:282] 0 containers: []
	W1209 04:45:04.167930 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:04.167936 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:04.168012 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:04.198111 1620518 cri.go:89] found id: ""
	I1209 04:45:04.198126 1620518 logs.go:282] 0 containers: []
	W1209 04:45:04.198133 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:04.198139 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:04.198201 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:04.228375 1620518 cri.go:89] found id: ""
	I1209 04:45:04.228389 1620518 logs.go:282] 0 containers: []
	W1209 04:45:04.228396 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:04.228402 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:04.228460 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:04.255398 1620518 cri.go:89] found id: ""
	I1209 04:45:04.255411 1620518 logs.go:282] 0 containers: []
	W1209 04:45:04.255418 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:04.255425 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:04.255436 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:04.285882 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:04.285898 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:04.352741 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:04.352763 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:04.369185 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:04.369202 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:04.440688 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:04.432150   15163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:04.432585   15163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:04.434392   15163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:04.434973   15163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:04.436580   15163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:45:04.440698 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:04.440710 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:07.013764 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:07.024294 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:07.024356 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:07.050143 1620518 cri.go:89] found id: ""
	I1209 04:45:07.050157 1620518 logs.go:282] 0 containers: []
	W1209 04:45:07.050164 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:07.050170 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:07.050240 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:07.076876 1620518 cri.go:89] found id: ""
	I1209 04:45:07.076890 1620518 logs.go:282] 0 containers: []
	W1209 04:45:07.076897 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:07.076902 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:07.076957 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:07.102491 1620518 cri.go:89] found id: ""
	I1209 04:45:07.102505 1620518 logs.go:282] 0 containers: []
	W1209 04:45:07.102512 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:07.102517 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:07.102597 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:07.132406 1620518 cri.go:89] found id: ""
	I1209 04:45:07.132421 1620518 logs.go:282] 0 containers: []
	W1209 04:45:07.132428 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:07.132432 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:07.132489 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:07.158308 1620518 cri.go:89] found id: ""
	I1209 04:45:07.158322 1620518 logs.go:282] 0 containers: []
	W1209 04:45:07.158329 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:07.158334 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:07.158394 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:07.185219 1620518 cri.go:89] found id: ""
	I1209 04:45:07.185232 1620518 logs.go:282] 0 containers: []
	W1209 04:45:07.185240 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:07.185245 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:07.185304 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:07.211200 1620518 cri.go:89] found id: ""
	I1209 04:45:07.211213 1620518 logs.go:282] 0 containers: []
	W1209 04:45:07.211220 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:07.211227 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:07.211239 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:07.279098 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:07.279117 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:07.307654 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:07.307669 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:07.380382 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:07.380406 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:07.396198 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:07.396216 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:07.463840 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:07.455780   15275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:07.456634   15275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:07.458306   15275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:07.458894   15275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:07.460163   15275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:45:09.964491 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:09.974856 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:09.974917 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:10.013610 1620518 cri.go:89] found id: ""
	I1209 04:45:10.013627 1620518 logs.go:282] 0 containers: []
	W1209 04:45:10.013635 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:10.013641 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:10.013710 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:10.041923 1620518 cri.go:89] found id: ""
	I1209 04:45:10.041937 1620518 logs.go:282] 0 containers: []
	W1209 04:45:10.041945 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:10.041950 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:10.042012 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:10.070273 1620518 cri.go:89] found id: ""
	I1209 04:45:10.070287 1620518 logs.go:282] 0 containers: []
	W1209 04:45:10.070295 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:10.070306 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:10.070365 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:10.101336 1620518 cri.go:89] found id: ""
	I1209 04:45:10.101350 1620518 logs.go:282] 0 containers: []
	W1209 04:45:10.101357 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:10.101362 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:10.101423 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:10.129685 1620518 cri.go:89] found id: ""
	I1209 04:45:10.129699 1620518 logs.go:282] 0 containers: []
	W1209 04:45:10.129706 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:10.129711 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:10.129770 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:10.157137 1620518 cri.go:89] found id: ""
	I1209 04:45:10.157151 1620518 logs.go:282] 0 containers: []
	W1209 04:45:10.157158 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:10.157164 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:10.157223 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:10.186869 1620518 cri.go:89] found id: ""
	I1209 04:45:10.186883 1620518 logs.go:282] 0 containers: []
	W1209 04:45:10.186891 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:10.186898 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:10.186912 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:10.217015 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:10.217032 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:10.284415 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:10.284437 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:10.299713 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:10.299729 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:10.383660 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:10.374562   15372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:10.375344   15372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:10.376918   15372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:10.377428   15372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:10.379505   15372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:45:10.383683 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:10.383695 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:12.956212 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:12.967122 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:12.967187 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:12.992647 1620518 cri.go:89] found id: ""
	I1209 04:45:12.992661 1620518 logs.go:282] 0 containers: []
	W1209 04:45:12.992667 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:12.992673 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:12.992731 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:13.024601 1620518 cri.go:89] found id: ""
	I1209 04:45:13.024616 1620518 logs.go:282] 0 containers: []
	W1209 04:45:13.024623 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:13.024628 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:13.024689 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:13.054508 1620518 cri.go:89] found id: ""
	I1209 04:45:13.054522 1620518 logs.go:282] 0 containers: []
	W1209 04:45:13.054529 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:13.054534 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:13.054612 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:13.080662 1620518 cri.go:89] found id: ""
	I1209 04:45:13.080681 1620518 logs.go:282] 0 containers: []
	W1209 04:45:13.080688 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:13.080693 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:13.080750 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:13.112334 1620518 cri.go:89] found id: ""
	I1209 04:45:13.112347 1620518 logs.go:282] 0 containers: []
	W1209 04:45:13.112354 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:13.112363 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:13.112421 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:13.141334 1620518 cri.go:89] found id: ""
	I1209 04:45:13.141348 1620518 logs.go:282] 0 containers: []
	W1209 04:45:13.141355 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:13.141360 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:13.141433 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:13.166692 1620518 cri.go:89] found id: ""
	I1209 04:45:13.166706 1620518 logs.go:282] 0 containers: []
	W1209 04:45:13.166713 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:13.166721 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:13.166735 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:13.230693 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:13.221480   15460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:13.222331   15460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:13.224060   15460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:13.224679   15460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:13.226481   15460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:45:13.230703 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:13.230718 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:13.299665 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:13.299685 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:13.343575 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:13.343591 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:13.418530 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:13.418550 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:15.934049 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:15.944397 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:15.944459 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:15.969801 1620518 cri.go:89] found id: ""
	I1209 04:45:15.969814 1620518 logs.go:282] 0 containers: []
	W1209 04:45:15.969821 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:15.969827 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:15.969886 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:15.995679 1620518 cri.go:89] found id: ""
	I1209 04:45:15.995693 1620518 logs.go:282] 0 containers: []
	W1209 04:45:15.995700 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:15.995705 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:15.995761 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:16.029078 1620518 cri.go:89] found id: ""
	I1209 04:45:16.029092 1620518 logs.go:282] 0 containers: []
	W1209 04:45:16.029100 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:16.029105 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:16.029167 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:16.057686 1620518 cri.go:89] found id: ""
	I1209 04:45:16.057700 1620518 logs.go:282] 0 containers: []
	W1209 04:45:16.057707 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:16.057712 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:16.057773 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:16.085790 1620518 cri.go:89] found id: ""
	I1209 04:45:16.085804 1620518 logs.go:282] 0 containers: []
	W1209 04:45:16.085811 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:16.085816 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:16.085876 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:16.112272 1620518 cri.go:89] found id: ""
	I1209 04:45:16.112288 1620518 logs.go:282] 0 containers: []
	W1209 04:45:16.112295 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:16.112301 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:16.112371 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:16.137697 1620518 cri.go:89] found id: ""
	I1209 04:45:16.137711 1620518 logs.go:282] 0 containers: []
	W1209 04:45:16.137718 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:16.137726 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:16.137741 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:16.170480 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:16.170495 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:16.235651 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:16.235671 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:16.250648 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:16.250664 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:16.313079 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:16.304999   15583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:16.305695   15583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:16.307368   15583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:16.307905   15583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:16.309411   15583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:45:16.313088 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:16.313099 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:18.888938 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:18.899614 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:18.899678 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:18.925762 1620518 cri.go:89] found id: ""
	I1209 04:45:18.925775 1620518 logs.go:282] 0 containers: []
	W1209 04:45:18.925782 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:18.925787 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:18.925843 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:18.952615 1620518 cri.go:89] found id: ""
	I1209 04:45:18.952629 1620518 logs.go:282] 0 containers: []
	W1209 04:45:18.952636 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:18.952641 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:18.952703 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:18.978511 1620518 cri.go:89] found id: ""
	I1209 04:45:18.978525 1620518 logs.go:282] 0 containers: []
	W1209 04:45:18.978532 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:18.978537 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:18.978620 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:19.007151 1620518 cri.go:89] found id: ""
	I1209 04:45:19.007166 1620518 logs.go:282] 0 containers: []
	W1209 04:45:19.007173 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:19.007183 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:19.007244 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:19.034621 1620518 cri.go:89] found id: ""
	I1209 04:45:19.034635 1620518 logs.go:282] 0 containers: []
	W1209 04:45:19.034643 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:19.034648 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:19.034708 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:19.063843 1620518 cri.go:89] found id: ""
	I1209 04:45:19.063856 1620518 logs.go:282] 0 containers: []
	W1209 04:45:19.063863 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:19.063868 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:19.063929 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:19.090085 1620518 cri.go:89] found id: ""
	I1209 04:45:19.090099 1620518 logs.go:282] 0 containers: []
	W1209 04:45:19.090106 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:19.090114 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:19.090125 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:19.159590 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:19.150395   15671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:19.151167   15671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:19.152762   15671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:19.153413   15671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:19.155202   15671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:45:19.159614 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:19.159626 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:19.228469 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:19.228489 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:19.257518 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:19.257534 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:19.323776 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:19.323796 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
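By this point the probe has held a steady cadence (04:44:58 through 04:45:21, one probe roughly every three seconds) with no change in outcome. The wait amounts to a poll loop around the same pgrep the probe runs; the retry count below is illustrative, not taken from the log:

	# poll until an apiserver process appears; give up after ~5 minutes
	for _ in $(seq 1 100); do
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null && break
	  sleep 3
	done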
	I1209 04:45:21.846133 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:21.856537 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:21.856603 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:21.883050 1620518 cri.go:89] found id: ""
	I1209 04:45:21.883071 1620518 logs.go:282] 0 containers: []
	W1209 04:45:21.883079 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:21.883084 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:21.883144 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:21.909529 1620518 cri.go:89] found id: ""
	I1209 04:45:21.909544 1620518 logs.go:282] 0 containers: []
	W1209 04:45:21.909551 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:21.909557 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:21.909616 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:21.935426 1620518 cri.go:89] found id: ""
	I1209 04:45:21.935440 1620518 logs.go:282] 0 containers: []
	W1209 04:45:21.935447 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:21.935452 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:21.935513 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:21.964269 1620518 cri.go:89] found id: ""
	I1209 04:45:21.964283 1620518 logs.go:282] 0 containers: []
	W1209 04:45:21.964290 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:21.964295 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:21.964351 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:21.991621 1620518 cri.go:89] found id: ""
	I1209 04:45:21.991637 1620518 logs.go:282] 0 containers: []
	W1209 04:45:21.991644 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:21.991650 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:21.991710 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:22.018422 1620518 cri.go:89] found id: ""
	I1209 04:45:22.018437 1620518 logs.go:282] 0 containers: []
	W1209 04:45:22.018445 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:22.018450 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:22.018510 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:22.045499 1620518 cri.go:89] found id: ""
	I1209 04:45:22.045514 1620518 logs.go:282] 0 containers: []
	W1209 04:45:22.045522 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:22.045529 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:22.045541 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:22.111892 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:22.103280   15779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:22.104064   15779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:22.105650   15779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:22.106182   15779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:22.107773   15779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:45:22.103280   15779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:22.104064   15779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:22.105650   15779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:22.106182   15779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:22.107773   15779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:45:22.111907 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:22.111923 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:22.180045 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:22.180065 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:22.210199 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:22.210215 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:22.276418 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:22.276439 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:24.791989 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:24.802138 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:24.802199 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:24.830421 1620518 cri.go:89] found id: ""
	I1209 04:45:24.830434 1620518 logs.go:282] 0 containers: []
	W1209 04:45:24.830441 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:24.830446 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:24.830509 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:24.855641 1620518 cri.go:89] found id: ""
	I1209 04:45:24.855653 1620518 logs.go:282] 0 containers: []
	W1209 04:45:24.855661 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:24.855666 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:24.855723 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:24.882261 1620518 cri.go:89] found id: ""
	I1209 04:45:24.882275 1620518 logs.go:282] 0 containers: []
	W1209 04:45:24.882282 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:24.882287 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:24.882346 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:24.909451 1620518 cri.go:89] found id: ""
	I1209 04:45:24.909465 1620518 logs.go:282] 0 containers: []
	W1209 04:45:24.909472 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:24.909477 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:24.909538 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:24.935023 1620518 cri.go:89] found id: ""
	I1209 04:45:24.935036 1620518 logs.go:282] 0 containers: []
	W1209 04:45:24.935043 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:24.935048 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:24.935105 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:24.965362 1620518 cri.go:89] found id: ""
	I1209 04:45:24.965375 1620518 logs.go:282] 0 containers: []
	W1209 04:45:24.965390 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:24.965396 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:24.965454 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:24.993349 1620518 cri.go:89] found id: ""
	I1209 04:45:24.993362 1620518 logs.go:282] 0 containers: []
	W1209 04:45:24.993369 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:24.993377 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:24.993387 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:25.060817 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:25.060841 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:25.077397 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:25.077415 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:25.149136 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:25.140893   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.141508   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.142608   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.143318   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.145008   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:45:25.140893   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.141508   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.142608   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.143318   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.145008   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:45:25.149146 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:25.149157 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:25.218866 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:25.218886 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:27.749537 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:27.760277 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:27.760345 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:27.786623 1620518 cri.go:89] found id: ""
	I1209 04:45:27.786636 1620518 logs.go:282] 0 containers: []
	W1209 04:45:27.786643 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:27.786648 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:27.786705 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:27.813156 1620518 cri.go:89] found id: ""
	I1209 04:45:27.813169 1620518 logs.go:282] 0 containers: []
	W1209 04:45:27.813176 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:27.813181 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:27.813238 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:27.838803 1620518 cri.go:89] found id: ""
	I1209 04:45:27.838817 1620518 logs.go:282] 0 containers: []
	W1209 04:45:27.838824 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:27.838835 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:27.838896 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:27.865975 1620518 cri.go:89] found id: ""
	I1209 04:45:27.865988 1620518 logs.go:282] 0 containers: []
	W1209 04:45:27.865996 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:27.866001 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:27.866058 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:27.891739 1620518 cri.go:89] found id: ""
	I1209 04:45:27.891753 1620518 logs.go:282] 0 containers: []
	W1209 04:45:27.891761 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:27.891766 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:27.891825 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:27.922057 1620518 cri.go:89] found id: ""
	I1209 04:45:27.922071 1620518 logs.go:282] 0 containers: []
	W1209 04:45:27.922079 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:27.922084 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:27.922143 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:27.947345 1620518 cri.go:89] found id: ""
	I1209 04:45:27.947359 1620518 logs.go:282] 0 containers: []
	W1209 04:45:27.947366 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:27.947373 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:27.947384 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:28.018760 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:28.018788 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:28.035483 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:28.035508 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:28.104231 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:28.095397   15994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:28.096215   15994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:28.097976   15994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:28.098564   15994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:28.100134   15994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:45:28.095397   15994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:28.096215   15994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:28.097976   15994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:28.098564   15994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:28.100134   15994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:45:28.104241 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:28.104253 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:28.173176 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:28.173196 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:30.707635 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:30.717972 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:30.718036 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:30.743336 1620518 cri.go:89] found id: ""
	I1209 04:45:30.743350 1620518 logs.go:282] 0 containers: []
	W1209 04:45:30.743357 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:30.743363 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:30.743420 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:30.768727 1620518 cri.go:89] found id: ""
	I1209 04:45:30.768741 1620518 logs.go:282] 0 containers: []
	W1209 04:45:30.768748 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:30.768754 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:30.768811 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:30.797959 1620518 cri.go:89] found id: ""
	I1209 04:45:30.797973 1620518 logs.go:282] 0 containers: []
	W1209 04:45:30.797980 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:30.797985 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:30.798046 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:30.825422 1620518 cri.go:89] found id: ""
	I1209 04:45:30.825435 1620518 logs.go:282] 0 containers: []
	W1209 04:45:30.825442 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:30.825448 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:30.825506 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:30.854265 1620518 cri.go:89] found id: ""
	I1209 04:45:30.854278 1620518 logs.go:282] 0 containers: []
	W1209 04:45:30.854285 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:30.854290 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:30.854347 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:30.880403 1620518 cri.go:89] found id: ""
	I1209 04:45:30.880418 1620518 logs.go:282] 0 containers: []
	W1209 04:45:30.880426 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:30.880432 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:30.880494 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:30.913767 1620518 cri.go:89] found id: ""
	I1209 04:45:30.913781 1620518 logs.go:282] 0 containers: []
	W1209 04:45:30.913789 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:30.913796 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:30.913807 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:30.980378 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:30.980398 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:30.995822 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:30.995838 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:31.066169 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:31.058055   16098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:31.058662   16098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:31.060209   16098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:31.060692   16098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:31.062141   16098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:45:31.058055   16098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:31.058662   16098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:31.060209   16098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:31.060692   16098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:31.062141   16098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:45:31.066179 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:31.066190 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:31.138123 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:31.138142 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:33.670737 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:33.681036 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:33.681099 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:33.709926 1620518 cri.go:89] found id: ""
	I1209 04:45:33.709939 1620518 logs.go:282] 0 containers: []
	W1209 04:45:33.709947 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:33.709963 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:33.710023 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:33.737554 1620518 cri.go:89] found id: ""
	I1209 04:45:33.737567 1620518 logs.go:282] 0 containers: []
	W1209 04:45:33.737574 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:33.737579 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:33.737640 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:33.763709 1620518 cri.go:89] found id: ""
	I1209 04:45:33.763723 1620518 logs.go:282] 0 containers: []
	W1209 04:45:33.763731 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:33.763736 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:33.763794 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:33.792885 1620518 cri.go:89] found id: ""
	I1209 04:45:33.792899 1620518 logs.go:282] 0 containers: []
	W1209 04:45:33.792906 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:33.792912 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:33.792971 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:33.818657 1620518 cri.go:89] found id: ""
	I1209 04:45:33.818671 1620518 logs.go:282] 0 containers: []
	W1209 04:45:33.818678 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:33.818683 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:33.818741 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:33.845152 1620518 cri.go:89] found id: ""
	I1209 04:45:33.845167 1620518 logs.go:282] 0 containers: []
	W1209 04:45:33.845174 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:33.845179 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:33.845237 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:33.871504 1620518 cri.go:89] found id: ""
	I1209 04:45:33.871517 1620518 logs.go:282] 0 containers: []
	W1209 04:45:33.871524 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:33.871532 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:33.871543 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:33.938353 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:33.938373 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:33.954248 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:33.954267 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:34.025014 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:34.015102   16202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:34.016063   16202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:34.016884   16202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:34.018662   16202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:34.019422   16202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:45:34.015102   16202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:34.016063   16202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:34.016884   16202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:34.018662   16202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:34.019422   16202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:45:34.025026 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:34.025038 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:34.096006 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:34.096027 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:36.630302 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:36.640925 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:36.640999 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:36.669961 1620518 cri.go:89] found id: ""
	I1209 04:45:36.669975 1620518 logs.go:282] 0 containers: []
	W1209 04:45:36.669982 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:36.669988 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:36.670044 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:36.696918 1620518 cri.go:89] found id: ""
	I1209 04:45:36.696934 1620518 logs.go:282] 0 containers: []
	W1209 04:45:36.696942 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:36.696947 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:36.697007 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:36.727113 1620518 cri.go:89] found id: ""
	I1209 04:45:36.727127 1620518 logs.go:282] 0 containers: []
	W1209 04:45:36.727136 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:36.727141 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:36.727201 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:36.752459 1620518 cri.go:89] found id: ""
	I1209 04:45:36.752473 1620518 logs.go:282] 0 containers: []
	W1209 04:45:36.752480 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:36.752485 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:36.752543 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:36.778403 1620518 cri.go:89] found id: ""
	I1209 04:45:36.778417 1620518 logs.go:282] 0 containers: []
	W1209 04:45:36.778425 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:36.778430 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:36.778488 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:36.809409 1620518 cri.go:89] found id: ""
	I1209 04:45:36.809423 1620518 logs.go:282] 0 containers: []
	W1209 04:45:36.809430 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:36.809436 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:36.809494 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:36.838444 1620518 cri.go:89] found id: ""
	I1209 04:45:36.838457 1620518 logs.go:282] 0 containers: []
	W1209 04:45:36.838464 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:36.838472 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:36.838484 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:36.853995 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:36.854011 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:36.919371 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:36.909708   16305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:36.910442   16305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:36.912223   16305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:36.912779   16305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:36.914634   16305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:45:36.909708   16305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:36.910442   16305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:36.912223   16305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:36.912779   16305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:36.914634   16305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:45:36.919381 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:36.919395 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:36.992004 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:36.992025 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:37.033214 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:37.033230 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:39.602680 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:39.614476 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:39.614537 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:39.644626 1620518 cri.go:89] found id: ""
	I1209 04:45:39.644640 1620518 logs.go:282] 0 containers: []
	W1209 04:45:39.644647 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:39.644652 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:39.644711 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:39.673317 1620518 cri.go:89] found id: ""
	I1209 04:45:39.673331 1620518 logs.go:282] 0 containers: []
	W1209 04:45:39.673338 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:39.673343 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:39.673404 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:39.699053 1620518 cri.go:89] found id: ""
	I1209 04:45:39.699067 1620518 logs.go:282] 0 containers: []
	W1209 04:45:39.699074 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:39.699079 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:39.699141 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:39.724341 1620518 cri.go:89] found id: ""
	I1209 04:45:39.724355 1620518 logs.go:282] 0 containers: []
	W1209 04:45:39.724362 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:39.724370 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:39.724429 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:39.749975 1620518 cri.go:89] found id: ""
	I1209 04:45:39.749988 1620518 logs.go:282] 0 containers: []
	W1209 04:45:39.749995 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:39.750001 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:39.750060 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:39.774556 1620518 cri.go:89] found id: ""
	I1209 04:45:39.774588 1620518 logs.go:282] 0 containers: []
	W1209 04:45:39.774597 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:39.774602 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:39.774663 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:39.800285 1620518 cri.go:89] found id: ""
	I1209 04:45:39.800299 1620518 logs.go:282] 0 containers: []
	W1209 04:45:39.800307 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:39.800314 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:39.800325 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:39.830073 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:39.830089 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:39.898438 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:39.898457 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:39.913743 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:39.913759 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:39.982308 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:39.974192   16422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:39.974938   16422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:39.976740   16422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:39.977237   16422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:39.978358   16422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:45:39.974192   16422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:39.974938   16422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:39.976740   16422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:39.977237   16422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:39.978358   16422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:45:39.982319 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:39.982332 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:42.561378 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:42.571315 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:42.571383 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:42.602452 1620518 cri.go:89] found id: ""
	I1209 04:45:42.602466 1620518 logs.go:282] 0 containers: []
	W1209 04:45:42.602473 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:42.602478 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:42.602541 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:42.634016 1620518 cri.go:89] found id: ""
	I1209 04:45:42.634029 1620518 logs.go:282] 0 containers: []
	W1209 04:45:42.634037 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:42.634042 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:42.634102 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:42.665601 1620518 cri.go:89] found id: ""
	I1209 04:45:42.665614 1620518 logs.go:282] 0 containers: []
	W1209 04:45:42.665621 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:42.665627 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:42.665683 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:42.692605 1620518 cri.go:89] found id: ""
	I1209 04:45:42.692618 1620518 logs.go:282] 0 containers: []
	W1209 04:45:42.692626 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:42.692631 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:42.692692 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:42.719572 1620518 cri.go:89] found id: ""
	I1209 04:45:42.719585 1620518 logs.go:282] 0 containers: []
	W1209 04:45:42.719592 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:42.719598 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:42.719660 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:42.745298 1620518 cri.go:89] found id: ""
	I1209 04:45:42.745312 1620518 logs.go:282] 0 containers: []
	W1209 04:45:42.745319 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:42.745324 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:42.745391 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:42.770685 1620518 cri.go:89] found id: ""
	I1209 04:45:42.770698 1620518 logs.go:282] 0 containers: []
	W1209 04:45:42.770706 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:42.770714 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:42.770724 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:42.840866 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:42.840888 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:42.871659 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:42.871676 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:42.941154 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:42.941174 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:42.956621 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:42.956638 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:43.026115 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:43.016607   16527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:43.017380   16527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:43.019274   16527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:43.020072   16527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:43.021739   16527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:45:43.016607   16527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:43.017380   16527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:43.019274   16527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:43.020072   16527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:43.021739   16527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
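
Every describe-nodes attempt in this stretch fails identically: kubectl cannot reach https://localhost:8441, meaning nothing is listening on the apiserver port inside the node. One way to confirm that independently of kubectl is a raw TCP dial; the snippet below is illustrative and not part of minikube:

// Illustrative probe of the apiserver endpoint that kubectl keeps
// failing against in the log (localhost:8441). Not minikube code.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// Matches the log: dial tcp [::1]:8441: connect: connection refused
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
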
	I1209 04:45:45.527782 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:45.537648 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:45.537707 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:45.564248 1620518 cri.go:89] found id: ""
	I1209 04:45:45.564263 1620518 logs.go:282] 0 containers: []
	W1209 04:45:45.564270 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:45.564277 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:45.564337 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:45.599479 1620518 cri.go:89] found id: ""
	I1209 04:45:45.599492 1620518 logs.go:282] 0 containers: []
	W1209 04:45:45.599499 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:45.599504 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:45.599560 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:45.629541 1620518 cri.go:89] found id: ""
	I1209 04:45:45.629554 1620518 logs.go:282] 0 containers: []
	W1209 04:45:45.629563 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:45.629568 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:45.629624 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:45.660451 1620518 cri.go:89] found id: ""
	I1209 04:45:45.660465 1620518 logs.go:282] 0 containers: []
	W1209 04:45:45.660472 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:45.660477 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:45.660537 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:45.686489 1620518 cri.go:89] found id: ""
	I1209 04:45:45.686503 1620518 logs.go:282] 0 containers: []
	W1209 04:45:45.686509 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:45.686514 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:45.686616 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:45.711940 1620518 cri.go:89] found id: ""
	I1209 04:45:45.711954 1620518 logs.go:282] 0 containers: []
	W1209 04:45:45.711961 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:45.711967 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:45.712025 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:45.737703 1620518 cri.go:89] found id: ""
	I1209 04:45:45.737717 1620518 logs.go:282] 0 containers: []
	W1209 04:45:45.737724 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:45.737732 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:45.737745 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:45.802439 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:45.793968   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:45.794602   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:45.796316   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:45.796982   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:45.798503   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
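Every kubectl attempt above fails at the discovery step (memcache.go) because nothing is listening on the apiserver port. A minimal Go sketch of the same reachability check, assuming the endpoint localhost:8441 from the log; the helper name is illustrative, not part of minikube:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // reachable reports whether addr accepts TCP connections; a refused dial
    // is the same failure mode as the "connection refused" errors above.
    func reachable(addr string) bool {
    	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    	if err != nil {
    		return false
    	}
    	conn.Close()
    	return true
    }

    func main() {
    	fmt.Println("apiserver reachable:", reachable("localhost:8441"))
    }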
	I1209 04:45:45.802451 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:45.802474 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:45.871530 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:45.871550 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:45.901994 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:45.902010 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:45.973222 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:45.973241 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
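The cycle above then repeats every few seconds: probe for a kube-apiserver process with pgrep and, while it is absent, enumerate each expected control-plane container via crictl before re-gathering logs. A minimal sketch of that poll loop, assuming sudo, pgrep, and crictl are available on the node; this illustrates the pattern visible in the log, not minikube's actual code:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // containerIDs mirrors the "sudo crictl ps -a --quiet --name=..." calls
    // above: it returns the IDs of all containers whose name matches.
    func containerIDs(name string) []string {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil
    	}
    	return strings.Fields(string(out))
    }

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet"}
    	for {
    		// Probe for a live apiserver process first, as the pgrep lines do.
    		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			fmt.Println("kube-apiserver is up")
    			return
    		}
    		for _, c := range components {
    			if len(containerIDs(c)) == 0 {
    				fmt.Printf("no container found matching %q\n", c)
    			}
    		}
    		time.Sleep(3 * time.Second)
    	}
    }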
	I1209 04:45:48.488532 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:48.499003 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:48.499072 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:48.524749 1620518 cri.go:89] found id: ""
	I1209 04:45:48.524762 1620518 logs.go:282] 0 containers: []
	W1209 04:45:48.524769 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:48.524774 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:48.524830 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:48.553895 1620518 cri.go:89] found id: ""
	I1209 04:45:48.553909 1620518 logs.go:282] 0 containers: []
	W1209 04:45:48.553917 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:48.553922 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:48.553984 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:48.581047 1620518 cri.go:89] found id: ""
	I1209 04:45:48.581069 1620518 logs.go:282] 0 containers: []
	W1209 04:45:48.581078 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:48.581084 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:48.581153 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:48.614680 1620518 cri.go:89] found id: ""
	I1209 04:45:48.614693 1620518 logs.go:282] 0 containers: []
	W1209 04:45:48.614701 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:48.614706 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:48.614774 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:48.643818 1620518 cri.go:89] found id: ""
	I1209 04:45:48.643832 1620518 logs.go:282] 0 containers: []
	W1209 04:45:48.643839 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:48.643845 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:48.643919 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:48.669618 1620518 cri.go:89] found id: ""
	I1209 04:45:48.669632 1620518 logs.go:282] 0 containers: []
	W1209 04:45:48.669642 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:48.669647 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:48.669710 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:48.699049 1620518 cri.go:89] found id: ""
	I1209 04:45:48.699063 1620518 logs.go:282] 0 containers: []
	W1209 04:45:48.699070 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:48.699077 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:48.699088 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:48.731315 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:48.731331 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:48.798219 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:48.798239 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:48.813603 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:48.813620 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:48.877674 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:48.869445   16732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:48.870317   16732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:48.871899   16732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:48.872215   16732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:48.873716   16732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:45:48.877684 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:48.877695 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
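Each pass collects the same four diagnostic sources: kubelet and CRI-O from journald, kernel warnings from dmesg, and container status from crictl with a docker fallback. A short sketch of that gather step, with the command strings copied from the log; the function name and return shape are illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gatherLogs runs each diagnostic command from the log above; errors are
    // ignored so one failed source does not abort the whole diagnosis.
    func gatherLogs() map[string]string {
    	cmds := map[string]string{
    		"kubelet":          "sudo journalctl -u kubelet -n 400",
    		"CRI-O":            "sudo journalctl -u crio -n 400",
    		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    	}
    	out := make(map[string]string)
    	for name, c := range cmds {
    		b, _ := exec.Command("/bin/bash", "-c", c).CombinedOutput()
    		out[name] = string(b)
    	}
    	return out
    }

    func main() {
    	for name, logs := range gatherLogs() {
    		fmt.Printf("%-16s %d bytes\n", name, len(logs))
    	}
    }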
	I1209 04:45:51.447558 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:51.457634 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:51.457694 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:51.487281 1620518 cri.go:89] found id: ""
	I1209 04:45:51.487294 1620518 logs.go:282] 0 containers: []
	W1209 04:45:51.487301 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:51.487306 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:51.487364 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:51.518737 1620518 cri.go:89] found id: ""
	I1209 04:45:51.518751 1620518 logs.go:282] 0 containers: []
	W1209 04:45:51.518758 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:51.518763 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:51.518837 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:51.544469 1620518 cri.go:89] found id: ""
	I1209 04:45:51.544481 1620518 logs.go:282] 0 containers: []
	W1209 04:45:51.544488 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:51.544493 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:51.544549 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:51.569588 1620518 cri.go:89] found id: ""
	I1209 04:45:51.569602 1620518 logs.go:282] 0 containers: []
	W1209 04:45:51.569624 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:51.569628 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:51.569687 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:51.612979 1620518 cri.go:89] found id: ""
	I1209 04:45:51.612992 1620518 logs.go:282] 0 containers: []
	W1209 04:45:51.612999 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:51.613004 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:51.613062 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:51.646866 1620518 cri.go:89] found id: ""
	I1209 04:45:51.646880 1620518 logs.go:282] 0 containers: []
	W1209 04:45:51.646886 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:51.646892 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:51.646954 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:51.672767 1620518 cri.go:89] found id: ""
	I1209 04:45:51.672781 1620518 logs.go:282] 0 containers: []
	W1209 04:45:51.672788 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:51.672795 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:51.672805 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:51.738601 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:51.738620 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:51.753536 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:51.753553 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:51.823113 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:51.814616   16823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:51.815237   16823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:51.816978   16823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:51.817576   16823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:51.819130   16823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:45:51.823124 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:51.823134 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:51.895060 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:51.895078 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:54.424057 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:54.434546 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:54.434637 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:54.461148 1620518 cri.go:89] found id: ""
	I1209 04:45:54.461161 1620518 logs.go:282] 0 containers: []
	W1209 04:45:54.461179 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:54.461185 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:54.461245 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:54.491296 1620518 cri.go:89] found id: ""
	I1209 04:45:54.491310 1620518 logs.go:282] 0 containers: []
	W1209 04:45:54.491316 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:54.491322 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:54.491377 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:54.517141 1620518 cri.go:89] found id: ""
	I1209 04:45:54.517155 1620518 logs.go:282] 0 containers: []
	W1209 04:45:54.517162 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:54.517168 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:54.517228 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:54.543226 1620518 cri.go:89] found id: ""
	I1209 04:45:54.543245 1620518 logs.go:282] 0 containers: []
	W1209 04:45:54.543252 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:54.543258 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:54.543318 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:54.574984 1620518 cri.go:89] found id: ""
	I1209 04:45:54.574998 1620518 logs.go:282] 0 containers: []
	W1209 04:45:54.575005 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:54.575010 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:54.575069 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:54.612321 1620518 cri.go:89] found id: ""
	I1209 04:45:54.612335 1620518 logs.go:282] 0 containers: []
	W1209 04:45:54.612342 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:54.612347 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:54.612405 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:54.639817 1620518 cri.go:89] found id: ""
	I1209 04:45:54.639831 1620518 logs.go:282] 0 containers: []
	W1209 04:45:54.639839 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:54.639847 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:54.639858 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:54.704579 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:54.696022   16926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:54.696791   16926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:54.698435   16926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:54.699124   16926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:54.700720   16926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:45:54.704588 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:54.704610 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:54.772943 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:54.772962 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:54.802082 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:54.802097 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:54.873250 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:54.873278 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:57.389092 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:57.399566 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:57.399631 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:57.424671 1620518 cri.go:89] found id: ""
	I1209 04:45:57.424685 1620518 logs.go:282] 0 containers: []
	W1209 04:45:57.424692 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:57.424698 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:57.424755 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:57.449520 1620518 cri.go:89] found id: ""
	I1209 04:45:57.449533 1620518 logs.go:282] 0 containers: []
	W1209 04:45:57.449549 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:57.449554 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:57.449612 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:57.474934 1620518 cri.go:89] found id: ""
	I1209 04:45:57.474949 1620518 logs.go:282] 0 containers: []
	W1209 04:45:57.474956 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:57.474961 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:57.475017 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:57.504272 1620518 cri.go:89] found id: ""
	I1209 04:45:57.504285 1620518 logs.go:282] 0 containers: []
	W1209 04:45:57.504292 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:57.504297 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:57.504355 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:57.530784 1620518 cri.go:89] found id: ""
	I1209 04:45:57.530797 1620518 logs.go:282] 0 containers: []
	W1209 04:45:57.530804 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:57.530820 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:57.530878 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:57.556189 1620518 cri.go:89] found id: ""
	I1209 04:45:57.556202 1620518 logs.go:282] 0 containers: []
	W1209 04:45:57.556209 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:57.556214 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:57.556271 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:57.584245 1620518 cri.go:89] found id: ""
	I1209 04:45:57.584258 1620518 logs.go:282] 0 containers: []
	W1209 04:45:57.584266 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:57.584273 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:57.584286 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:57.618235 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:57.618250 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:57.693384 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:57.693403 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:57.708210 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:57.708227 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:57.773409 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:57.765285   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:57.766046   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:57.767558   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:57.768018   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:57.769496   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:45:57.773420 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:57.773430 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:46:00.342809 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:46:00.358795 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:46:00.358876 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:46:00.400877 1620518 cri.go:89] found id: ""
	I1209 04:46:00.400892 1620518 logs.go:282] 0 containers: []
	W1209 04:46:00.400900 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:46:00.400906 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:46:00.400970 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:46:00.431798 1620518 cri.go:89] found id: ""
	I1209 04:46:00.431813 1620518 logs.go:282] 0 containers: []
	W1209 04:46:00.431820 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:46:00.431828 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:46:00.431892 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:46:00.460666 1620518 cri.go:89] found id: ""
	I1209 04:46:00.460686 1620518 logs.go:282] 0 containers: []
	W1209 04:46:00.460693 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:46:00.460698 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:46:00.460761 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:46:00.488457 1620518 cri.go:89] found id: ""
	I1209 04:46:00.488471 1620518 logs.go:282] 0 containers: []
	W1209 04:46:00.488479 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:46:00.488484 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:46:00.488551 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:46:00.517784 1620518 cri.go:89] found id: ""
	I1209 04:46:00.517797 1620518 logs.go:282] 0 containers: []
	W1209 04:46:00.517805 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:46:00.517810 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:46:00.517873 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:46:00.545946 1620518 cri.go:89] found id: ""
	I1209 04:46:00.545960 1620518 logs.go:282] 0 containers: []
	W1209 04:46:00.545968 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:46:00.545973 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:46:00.546035 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:46:00.575131 1620518 cri.go:89] found id: ""
	I1209 04:46:00.575153 1620518 logs.go:282] 0 containers: []
	W1209 04:46:00.575161 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:46:00.575168 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:46:00.575179 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:46:00.612360 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:46:00.612379 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:46:00.689205 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:46:00.689224 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:46:00.704596 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:46:00.704612 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:46:00.770156 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:46:00.762022   17152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:00.762546   17152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:00.764120   17152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:00.764452   17152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:00.765962   17152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:46:00.770165 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:46:00.770175 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:46:03.338719 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:46:03.349336 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:46:03.349402 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:46:03.374937 1620518 cri.go:89] found id: ""
	I1209 04:46:03.374950 1620518 logs.go:282] 0 containers: []
	W1209 04:46:03.374957 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:46:03.374963 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:46:03.375022 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:46:03.405176 1620518 cri.go:89] found id: ""
	I1209 04:46:03.405206 1620518 logs.go:282] 0 containers: []
	W1209 04:46:03.405213 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:46:03.405219 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:46:03.405285 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:46:03.434836 1620518 cri.go:89] found id: ""
	I1209 04:46:03.434860 1620518 logs.go:282] 0 containers: []
	W1209 04:46:03.434868 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:46:03.434874 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:46:03.434948 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:46:03.464055 1620518 cri.go:89] found id: ""
	I1209 04:46:03.464077 1620518 logs.go:282] 0 containers: []
	W1209 04:46:03.464085 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:46:03.464090 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:46:03.464189 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:46:03.493083 1620518 cri.go:89] found id: ""
	I1209 04:46:03.493106 1620518 logs.go:282] 0 containers: []
	W1209 04:46:03.493114 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:46:03.493119 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:46:03.493194 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:46:03.518929 1620518 cri.go:89] found id: ""
	I1209 04:46:03.518942 1620518 logs.go:282] 0 containers: []
	W1209 04:46:03.518950 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:46:03.518955 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:46:03.519016 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:46:03.543738 1620518 cri.go:89] found id: ""
	I1209 04:46:03.543751 1620518 logs.go:282] 0 containers: []
	W1209 04:46:03.543758 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:46:03.543766 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:46:03.543776 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:46:03.611972 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:46:03.611992 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:46:03.644882 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:46:03.644905 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:46:03.715853 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:46:03.715873 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:46:03.730852 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:46:03.730870 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:46:03.797963 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:46:03.789266   17259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:03.790005   17259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:03.791740   17259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:03.792349   17259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:03.794037   17259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:46:06.299034 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:46:06.310369 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:46:06.310430 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:46:06.338011 1620518 cri.go:89] found id: ""
	I1209 04:46:06.338024 1620518 logs.go:282] 0 containers: []
	W1209 04:46:06.338031 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:46:06.338037 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:46:06.338093 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:46:06.364537 1620518 cri.go:89] found id: ""
	I1209 04:46:06.364551 1620518 logs.go:282] 0 containers: []
	W1209 04:46:06.364558 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:46:06.364566 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:46:06.364621 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:46:06.390874 1620518 cri.go:89] found id: ""
	I1209 04:46:06.390894 1620518 logs.go:282] 0 containers: []
	W1209 04:46:06.390907 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:46:06.390912 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:46:06.390972 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:46:06.416068 1620518 cri.go:89] found id: ""
	I1209 04:46:06.416082 1620518 logs.go:282] 0 containers: []
	W1209 04:46:06.416088 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:46:06.416093 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:46:06.416152 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:46:06.445711 1620518 cri.go:89] found id: ""
	I1209 04:46:06.445724 1620518 logs.go:282] 0 containers: []
	W1209 04:46:06.445731 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:46:06.445736 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:46:06.445794 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:46:06.472619 1620518 cri.go:89] found id: ""
	I1209 04:46:06.472632 1620518 logs.go:282] 0 containers: []
	W1209 04:46:06.472639 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:46:06.472644 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:46:06.472704 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:46:06.501335 1620518 cri.go:89] found id: ""
	I1209 04:46:06.501348 1620518 logs.go:282] 0 containers: []
	W1209 04:46:06.501355 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:46:06.501372 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:46:06.501382 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:46:06.564989 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:46:06.556947   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:06.557432   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:06.559150   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:06.559456   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:06.560989   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:46:06.564998 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:46:06.565009 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:46:06.636608 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:46:06.636626 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:46:06.667969 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:46:06.667986 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:46:06.734125 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:46:06.734145 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:46:09.249456 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:46:09.259765 1620518 kubeadm.go:602] duration metric: took 4m2.693827645s to restartPrimaryControlPlane
	W1209 04:46:09.259826 1620518 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1209 04:46:09.259905 1620518 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1209 04:46:09.672351 1620518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 04:46:09.685870 1620518 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 04:46:09.693855 1620518 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 04:46:09.693913 1620518 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 04:46:09.701686 1620518 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 04:46:09.701697 1620518 kubeadm.go:158] found existing configuration files:
	
	I1209 04:46:09.701750 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1209 04:46:09.709486 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 04:46:09.709542 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 04:46:09.717080 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1209 04:46:09.724681 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 04:46:09.724735 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 04:46:09.732335 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1209 04:46:09.740201 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 04:46:09.740255 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 04:46:09.747717 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1209 04:46:09.755316 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 04:46:09.755370 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
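Before re-running kubeadm, each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and removed when the grep fails, which is what the grep/rm pairs above show (here every file is already absent, so each rm -f is a no-op). A sketch of that cleanup under the same semantics; the function name is illustrative:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // removeStaleConfig deletes a kubeconfig that does not reference the
    // expected endpoint; a missing file is fine, matching rm -f above.
    func removeStaleConfig(path, endpoint string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return nil // "No such file or directory", as in the log
    	}
    	if !strings.Contains(string(data), endpoint) {
    		return os.Remove(path)
    	}
    	return nil
    }

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8441"
    	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
    		if err := removeStaleConfig("/etc/kubernetes/"+f, endpoint); err != nil {
    			fmt.Println(err)
    		}
    	}
    }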
	I1209 04:46:09.762723 1620518 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 04:46:09.800341 1620518 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1209 04:46:09.800668 1620518 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 04:46:09.867665 1620518 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 04:46:09.867727 1620518 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1209 04:46:09.867766 1620518 kubeadm.go:319] OS: Linux
	I1209 04:46:09.867807 1620518 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 04:46:09.867852 1620518 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1209 04:46:09.867896 1620518 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 04:46:09.867942 1620518 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 04:46:09.867987 1620518 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 04:46:09.868032 1620518 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 04:46:09.868074 1620518 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 04:46:09.868120 1620518 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 04:46:09.868162 1620518 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1209 04:46:09.937281 1620518 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 04:46:09.937384 1620518 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 04:46:09.937481 1620518 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 04:46:09.947317 1620518 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 04:46:09.952721 1620518 out.go:252]   - Generating certificates and keys ...
	I1209 04:46:09.952808 1620518 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 04:46:09.952877 1620518 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 04:46:09.952958 1620518 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 04:46:09.953021 1620518 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1209 04:46:09.953092 1620518 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 04:46:09.953141 1620518 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1209 04:46:09.953206 1620518 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1209 04:46:09.953269 1620518 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1209 04:46:09.953343 1620518 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 04:46:09.953417 1620518 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 04:46:09.953461 1620518 kubeadm.go:319] [certs] Using the existing "sa" key
	I1209 04:46:09.953513 1620518 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 04:46:10.029245 1620518 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 04:46:10.224354 1620518 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 04:46:10.667691 1620518 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 04:46:10.882600 1620518 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 04:46:11.073140 1620518 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 04:46:11.073694 1620518 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 04:46:11.076408 1620518 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 04:46:11.079859 1620518 out.go:252]   - Booting up control plane ...
	I1209 04:46:11.079965 1620518 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 04:46:11.080042 1620518 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 04:46:11.080114 1620518 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 04:46:11.095853 1620518 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 04:46:11.095951 1620518 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 04:46:11.104994 1620518 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 04:46:11.105485 1620518 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 04:46:11.105715 1620518 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 04:46:11.236975 1620518 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 04:46:11.237088 1620518 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 04:50:11.237231 1620518 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000344141s
	I1209 04:50:11.237256 1620518 kubeadm.go:319] 
	I1209 04:50:11.237309 1620518 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1209 04:50:11.237340 1620518 kubeadm.go:319] 	- The kubelet is not running
	I1209 04:50:11.237438 1620518 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 04:50:11.237443 1620518 kubeadm.go:319] 
	I1209 04:50:11.237541 1620518 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 04:50:11.237571 1620518 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1209 04:50:11.237600 1620518 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1209 04:50:11.237603 1620518 kubeadm.go:319] 
	I1209 04:50:11.241458 1620518 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 04:50:11.241910 1620518 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1209 04:50:11.242023 1620518 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 04:50:11.242266 1620518 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1209 04:50:11.242272 1620518 kubeadm.go:319] 
	I1209 04:50:11.242336 1620518 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1209 04:50:11.242454 1620518 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000344141s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
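The kubeadm attempt above fails only because the kubelet never answers its health endpoint: kubeadm polls http://127.0.0.1:10248/healthz for up to 4m0s and gives up. A minimal manual probe inside the node, using the same checks the kubeadm output itself suggests (the ssh invocation is an assumption based on this report's profile name):

	out/minikube-linux-arm64 ssh -p functional-331811
	systemctl status kubelet --no-pager       # is the unit running or crash-looping?
	journalctl -xeu kubelet -n 50 --no-pager  # why the last start attempt failed
	curl -sS http://127.0.0.1:10248/healthz   # the endpoint kubeadm polls for 4m0s

The kubelet journal captured near the end of this report shows what these probes would have found: the kubelet exits during configuration validation and systemd restarts it in a loop.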
	I1209 04:50:11.242544 1620518 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1209 04:50:11.655787 1620518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 04:50:11.668676 1620518 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 04:50:11.668730 1620518 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 04:50:11.676546 1620518 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 04:50:11.676562 1620518 kubeadm.go:158] found existing configuration files:
	
	I1209 04:50:11.676612 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1209 04:50:11.684172 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 04:50:11.684236 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 04:50:11.691594 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1209 04:50:11.699302 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 04:50:11.699363 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 04:50:11.706772 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1209 04:50:11.714846 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 04:50:11.714902 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 04:50:11.722267 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1209 04:50:11.730186 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 04:50:11.730250 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
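The grep/rm sequence above is minikube's stale-kubeconfig cleanup before retrying kubeadm init: any file under /etc/kubernetes that does not already point at this cluster's control-plane endpoint is removed. Here every grep exits with status 2 because the preceding kubeadm reset already deleted the files, so the rm calls are no-ops. A shell sketch of the same logic:

	for f in admin kubelet controller-manager scheduler; do
	    sudo grep -q 'https://control-plane.minikube.internal:8441' \
	        "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
	done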
	I1209 04:50:11.738143 1620518 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 04:50:11.781074 1620518 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1209 04:50:11.781123 1620518 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 04:50:11.856141 1620518 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 04:50:11.856206 1620518 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1209 04:50:11.856240 1620518 kubeadm.go:319] OS: Linux
	I1209 04:50:11.856283 1620518 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 04:50:11.856330 1620518 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1209 04:50:11.856377 1620518 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 04:50:11.856424 1620518 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 04:50:11.856471 1620518 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 04:50:11.856522 1620518 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 04:50:11.856566 1620518 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 04:50:11.856614 1620518 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 04:50:11.856660 1620518 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1209 04:50:11.927746 1620518 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 04:50:11.927875 1620518 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 04:50:11.927971 1620518 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 04:50:11.934983 1620518 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 04:50:11.938507 1620518 out.go:252]   - Generating certificates and keys ...
	I1209 04:50:11.938697 1620518 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 04:50:11.938772 1620518 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 04:50:11.938867 1620518 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 04:50:11.938937 1620518 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1209 04:50:11.939018 1620518 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 04:50:11.939071 1620518 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1209 04:50:11.939143 1620518 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1209 04:50:11.939213 1620518 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1209 04:50:11.939302 1620518 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 04:50:11.939383 1620518 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 04:50:11.939690 1620518 kubeadm.go:319] [certs] Using the existing "sa" key
	I1209 04:50:11.939748 1620518 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 04:50:12.353584 1620518 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 04:50:12.812738 1620518 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 04:50:13.265058 1620518 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 04:50:13.417250 1620518 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 04:50:13.472548 1620518 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 04:50:13.473076 1620518 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 04:50:13.475724 1620518 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 04:50:13.478920 1620518 out.go:252]   - Booting up control plane ...
	I1209 04:50:13.479026 1620518 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 04:50:13.479104 1620518 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 04:50:13.479930 1620518 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 04:50:13.496348 1620518 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 04:50:13.496458 1620518 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 04:50:13.504378 1620518 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 04:50:13.504655 1620518 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 04:50:13.504696 1620518 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 04:50:13.630713 1620518 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 04:50:13.630826 1620518 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 04:54:13.630972 1620518 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000259173s
	I1209 04:54:13.630997 1620518 kubeadm.go:319] 
	I1209 04:54:13.631053 1620518 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1209 04:54:13.631086 1620518 kubeadm.go:319] 	- The kubelet is not running
	I1209 04:54:13.631200 1620518 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 04:54:13.631206 1620518 kubeadm.go:319] 
	I1209 04:54:13.631310 1620518 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 04:54:13.631395 1620518 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1209 04:54:13.631461 1620518 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1209 04:54:13.631466 1620518 kubeadm.go:319] 
	I1209 04:54:13.635649 1620518 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 04:54:13.636127 1620518 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1209 04:54:13.636242 1620518 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 04:54:13.636479 1620518 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1209 04:54:13.636485 1620518 kubeadm.go:319] 
	I1209 04:54:13.636553 1620518 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1209 04:54:13.636616 1620518 kubeadm.go:403] duration metric: took 12m7.110467735s to StartCluster
	I1209 04:54:13.636648 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:54:13.636715 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:54:13.662011 1620518 cri.go:89] found id: ""
	I1209 04:54:13.662024 1620518 logs.go:282] 0 containers: []
	W1209 04:54:13.662032 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:54:13.662037 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:54:13.662094 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:54:13.688278 1620518 cri.go:89] found id: ""
	I1209 04:54:13.688293 1620518 logs.go:282] 0 containers: []
	W1209 04:54:13.688299 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:54:13.688304 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:54:13.688363 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:54:13.714700 1620518 cri.go:89] found id: ""
	I1209 04:54:13.714715 1620518 logs.go:282] 0 containers: []
	W1209 04:54:13.714723 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:54:13.714729 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:54:13.714795 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:54:13.740152 1620518 cri.go:89] found id: ""
	I1209 04:54:13.740166 1620518 logs.go:282] 0 containers: []
	W1209 04:54:13.740173 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:54:13.740178 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:54:13.740235 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:54:13.766214 1620518 cri.go:89] found id: ""
	I1209 04:54:13.766227 1620518 logs.go:282] 0 containers: []
	W1209 04:54:13.766235 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:54:13.766240 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:54:13.766300 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:54:13.793141 1620518 cri.go:89] found id: ""
	I1209 04:54:13.793155 1620518 logs.go:282] 0 containers: []
	W1209 04:54:13.793162 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:54:13.793168 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:54:13.793225 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:54:13.824264 1620518 cri.go:89] found id: ""
	I1209 04:54:13.824278 1620518 logs.go:282] 0 containers: []
	W1209 04:54:13.824286 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:54:13.824294 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:54:13.824305 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:54:13.865509 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:54:13.865527 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:54:13.944055 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:54:13.944075 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:54:13.960571 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:54:13.960593 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:54:14.028160 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:54:14.019001   21174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:54:14.019792   21174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:54:14.021489   21174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:54:14.021862   21174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:54:14.023410   21174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:54:14.019001   21174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:54:14.019792   21174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:54:14.021489   21174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:54:14.021862   21174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:54:14.023410   21174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:54:14.028170 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:54:14.028180 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1209 04:54:14.099915 1620518 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000259173s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1209 04:54:14.099962 1620518 out.go:285] * 
	W1209 04:54:14.100108 1620518 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000259173s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 04:54:14.100197 1620518 out.go:285] * 
	W1209 04:54:14.102317 1620518 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 04:54:14.107888 1620518 out.go:203] 
	W1209 04:54:14.111655 1620518 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000259173s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 04:54:14.111892 1620518 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1209 04:54:14.111932 1620518 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1209 04:54:14.116964 1620518 out.go:203] 
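The suggestion above is minikube's generic hint for kubelet cgroup trouble. A retry taking it verbatim might look as follows; the driver and runtime flags are assumptions inferred from this report's job name (Docker_Linux_crio_arm64), and only the --extra-config flag comes from the hint itself:

	out/minikube-linux-arm64 start -p functional-331811 --driver=docker \
	    --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd

Note, however, that the kubelet journal later in this report blames cgroup v1 validation rather than the cgroup driver, so this hint alone would likely not have resolved the failure.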
	
	
	==> CRI-O <==
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927580587Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927620637Z" level=info msg="Starting seccomp notifier watcher"
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927668178Z" level=info msg="Create NRI interface"
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927758033Z" level=info msg="built-in NRI default validator is disabled"
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927766493Z" level=info msg="runtime interface created"
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927780007Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927786308Z" level=info msg="runtime interface starting up..."
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927792741Z" level=info msg="starting plugins..."
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927805771Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927872323Z" level=info msg="No systemd watchdog enabled"
	Dec 09 04:42:04 functional-331811 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 09 04:46:09 functional-331811 crio[9992]: time="2025-12-09T04:46:09.942951614Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=d42015e0-8a7e-47f7-95a2-398ea8aa48f1 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:46:09 functional-331811 crio[9992]: time="2025-12-09T04:46:09.943749037Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=554d2336-7df0-4ab3-87a2-3f0040c79a84 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:46:09 functional-331811 crio[9992]: time="2025-12-09T04:46:09.944291229Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=70fb14c4-f971-4387-8e1b-10c98c4791aa name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:46:09 functional-331811 crio[9992]: time="2025-12-09T04:46:09.944730675Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=36db540a-ff25-4b5c-b7d7-cd7322fbd4bb name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:46:09 functional-331811 crio[9992]: time="2025-12-09T04:46:09.945138629Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=7427d70a-8db2-44c3-88f8-0607ec671ff6 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:46:09 functional-331811 crio[9992]: time="2025-12-09T04:46:09.945576229Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=b63b04fd-62c4-4cf0-9b5b-23eef2eb12c5 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:46:09 functional-331811 crio[9992]: time="2025-12-09T04:46:09.946074564Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=287329f7-949c-4b5b-8433-0437004398fd name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.930917732Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=60059689-b22e-4d2c-a555-518b088e6c52 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.93157629Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=cbef184f-5cab-42ab-88e7-b508de5c76c0 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.932075323Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=edcddd48-11b2-4a3e-b703-e9cffa332272 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.932520767Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=b8ee1139-0fe9-45a4-8cea-2e86a978a2fc name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.932923437Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=466ae3ad-f5a9-4d87-be0b-42f8886ae7b1 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.933429871Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=52758864-5ad7-4972-9017-2c4a591649f4 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.933861662Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=61e91b9e-e75b-4cf2-b677-070bdf524fb9 name=/runtime.v1.ImageService/ImageStatus
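The CRI-O journal above shows the runtime itself is healthy: after startup it only ever answers kubeadm's preflight image-status probes, once per init attempt (04:46:09 and 04:50:11). No containers were ever created, which matches the empty container-status table below. To confirm by hand inside the node:

	sudo crictl ps -a    # empty here: the kubelet never got far enough to create pods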
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:54:17.519885   21417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:54:17.520639   21417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:54:17.522252   21417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:54:17.522610   21417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:54:17.524170   21417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 9 02:15] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 03:35] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 04:15] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 04:17] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:23] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:24] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:41] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 04:54:17 up  9:36,  0 user,  load average: 0.05, 0.16, 0.43
	Linux functional-331811 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 09 04:54:15 functional-331811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:54:15 functional-331811 kubelet[21283]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:54:15 functional-331811 kubelet[21283]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:54:15 functional-331811 kubelet[21283]: E1209 04:54:15.379856   21283 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:54:15 functional-331811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:54:15 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:54:16 functional-331811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 964.
	Dec 09 04:54:16 functional-331811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:54:16 functional-331811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:54:16 functional-331811 kubelet[21300]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:54:16 functional-331811 kubelet[21300]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:54:16 functional-331811 kubelet[21300]: E1209 04:54:16.138953   21300 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:54:16 functional-331811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:54:16 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:54:16 functional-331811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 965.
	Dec 09 04:54:16 functional-331811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:54:16 functional-331811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:54:16 functional-331811 kubelet[21334]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:54:16 functional-331811 kubelet[21334]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:54:16 functional-331811 kubelet[21334]: E1209 04:54:16.887956   21334 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:54:16 functional-331811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:54:16 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:54:17 functional-331811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 966.
	Dec 09 04:54:17 functional-331811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:54:17 functional-331811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-331811 -n functional-331811
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-331811 -n functional-331811: exit status 2 (361.247566ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-331811" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.21s)
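The kubelet journal above is the root cause behind this whole group of failures: every start attempt dies during config validation because the host still mounts cgroup v1, which kubelet v1.35.0-beta.0 refuses outright, and the systemd restart counter climbing from 964 to 966 in about two seconds (and reaching 1141-1143 by 04:56:30 in the DashboardCmd logs further down) shows a tight crash loop in which no apiserver can ever come up. Below is a minimal standalone Go sketch of the same host-side check; it is not minikube's or the kubelet's actual code, and it assumes the golang.org/x/sys/unix module is available:

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	// Stat the cgroup mount point, which is effectively what the kubelet's
	// validation does: cgroup v2 hosts expose a unified cgroup2 filesystem.
	var st unix.Statfs_t
	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
		fmt.Println("statfs failed:", err)
		return
	}
	if st.Type == unix.CGROUP2_SUPER_MAGIC {
		fmt.Println("cgroup v2 (unified): kubelet v1.35+ can start")
	} else {
		fmt.Println("cgroup v1: matches the \"unsupported\" error in the log above")
	}
}

Run on the Ubuntu 20.04 / 5.15 host recorded in this report, the sketch would take the cgroup v1 branch, consistent with the kubelet error.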

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-331811 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-331811 apply -f testdata/invalidsvc.yaml: exit status 1 (58.467245ms)

** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test.go:2328: kubectl --context functional-331811 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)
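Nothing here exercised the invalid-service logic at all: kubectl's client-side validation first has to fetch the OpenAPI schema from the apiserver, and 192.168.49.2:8441 refuses connections because no apiserver is running (see the kubelet crash loop above). A hedged Go probe that reproduces just the connectivity half of the failure, with the address taken from the error message in this report:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the apiserver endpoint kubectl tried to reach for /openapi/v2.
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err) // expect "connection refused"
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port is accepting connections")
}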

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.76s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-331811 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-331811 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-331811 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-331811 --alsologtostderr -v=1] stderr:
I1209 04:56:29.517738 1637916 out.go:360] Setting OutFile to fd 1 ...
I1209 04:56:29.517899 1637916 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:56:29.517918 1637916 out.go:374] Setting ErrFile to fd 2...
I1209 04:56:29.517933 1637916 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:56:29.518206 1637916 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
I1209 04:56:29.518494 1637916 mustload.go:66] Loading cluster: functional-331811
I1209 04:56:29.519016 1637916 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1209 04:56:29.519532 1637916 cli_runner.go:164] Run: docker container inspect functional-331811 --format={{.State.Status}}
I1209 04:56:29.537413 1637916 host.go:66] Checking if "functional-331811" exists ...
I1209 04:56:29.537723 1637916 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1209 04:56:29.599820 1637916 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 04:56:29.590002183 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1209 04:56:29.599954 1637916 api_server.go:166] Checking apiserver status ...
I1209 04:56:29.600024 1637916 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1209 04:56:29.600069 1637916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
I1209 04:56:29.616707 1637916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
W1209 04:56:29.724121 1637916 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1209 04:56:29.727402 1637916 out.go:179] * The control-plane node functional-331811 apiserver is not running: (state=Stopped)
I1209 04:56:29.730253 1637916 out.go:179]   To start a cluster, run: "minikube start -p functional-331811"
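The trace shows why no URL was ever printed: before launching the dashboard proxy, the command's preflight (api_server.go:166 above) looks for a running apiserver with sudo pgrep -xnf kube-apiserver.*minikube.* over SSH inside the node, gets exit status 1 (no match), and bails out with the "To start a cluster" hint instead of a URL. A rough local sketch of that process check, run directly instead of through minikube's SSH runner and assuming pgrep is on PATH:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// pgrep exits 0 with a pid on a match and 1 when nothing matches --
	// the "stopped" case recorded in the log above.
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("no kube-apiserver process found:", err)
		return
	}
	fmt.Printf("kube-apiserver pid: %s", out)
}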
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-331811
helpers_test.go:243: (dbg) docker inspect functional-331811:

-- stdout --
	[
	    {
	        "Id": "51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87",
	        "Created": "2025-12-09T04:27:19.770188806Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1609115,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T04:27:19.828715728Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/hostname",
	        "HostsPath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/hosts",
	        "LogPath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87-json.log",
	        "Name": "/functional-331811",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-331811:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-331811",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87",
	                "LowerDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685-init/diff:/var/lib/docker/overlay2/cb3f2b8eaaa8875b2899fccd39c4eec1759909855a0b804bc10246bdeabb16ed/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-331811",
	                "Source": "/var/lib/docker/volumes/functional-331811/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-331811",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-331811",
	                "name.minikube.sigs.k8s.io": "functional-331811",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5c0753338127320f08906f0ae98414e1971b55970cf028db179c2214fd2722cb",
	            "SandboxKey": "/var/run/docker/netns/5c0753338127",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34255"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34256"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34259"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34257"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34258"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-331811": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:27:66:bb:a1:d6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8c16962547dedb5d6155d1546bcc27e347ab5261f9ad46fc3b09cc8fb9cc112f",
	                    "EndpointID": "1a5d6a22e9497009b4121ea56dc4839e2ff8827d92252c0464236c5f49c11216",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-331811",
	                        "51da5dad63e9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-331811 -n functional-331811
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-331811 -n functional-331811: exit status 2 (341.266936ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service   │ functional-331811 service hello-node --url                                                                                                          │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │                     │
	│ mount     │ -p functional-331811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1726689791/001:/mount-9p --alsologtostderr -v=1              │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │                     │
	│ ssh       │ functional-331811 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │                     │
	│ ssh       │ functional-331811 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ ssh       │ functional-331811 ssh -- ls -la /mount-9p                                                                                                           │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ ssh       │ functional-331811 ssh cat /mount-9p/test-1765256179538005159                                                                                        │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ ssh       │ functional-331811 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │                     │
	│ ssh       │ functional-331811 ssh sudo umount -f /mount-9p                                                                                                      │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ ssh       │ functional-331811 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │                     │
	│ mount     │ -p functional-331811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1191664544/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │                     │
	│ ssh       │ functional-331811 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ ssh       │ functional-331811 ssh -- ls -la /mount-9p                                                                                                           │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ ssh       │ functional-331811 ssh sudo umount -f /mount-9p                                                                                                      │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │                     │
	│ ssh       │ functional-331811 ssh findmnt -T /mount1                                                                                                            │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │                     │
	│ mount     │ -p functional-331811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3534500844/001:/mount3 --alsologtostderr -v=1                │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │                     │
	│ mount     │ -p functional-331811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3534500844/001:/mount1 --alsologtostderr -v=1                │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │                     │
	│ mount     │ -p functional-331811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3534500844/001:/mount2 --alsologtostderr -v=1                │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │                     │
	│ ssh       │ functional-331811 ssh findmnt -T /mount1                                                                                                            │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ ssh       │ functional-331811 ssh findmnt -T /mount2                                                                                                            │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ ssh       │ functional-331811 ssh findmnt -T /mount3                                                                                                            │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ mount     │ -p functional-331811 --kill=true                                                                                                                    │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │                     │
	│ start     │ -p functional-331811 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │                     │
	│ start     │ -p functional-331811 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │                     │
	│ start     │ -p functional-331811 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                 │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-331811 --alsologtostderr -v=1                                                                                      │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 04:56:29
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 04:56:29.281449 1637842 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:56:29.281669 1637842 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:56:29.281696 1637842 out.go:374] Setting ErrFile to fd 2...
	I1209 04:56:29.281715 1637842 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:56:29.282029 1637842 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 04:56:29.282505 1637842 out.go:368] Setting JSON to false
	I1209 04:56:29.283419 1637842 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":34730,"bootTime":1765221460,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1209 04:56:29.283520 1637842 start.go:143] virtualization:  
	I1209 04:56:29.286665 1637842 out.go:179] * [functional-331811] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 04:56:29.289651 1637842 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 04:56:29.289721 1637842 notify.go:221] Checking for updates...
	I1209 04:56:29.293455 1637842 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:56:29.296286 1637842 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:56:29.299012 1637842 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1577059/.minikube
	I1209 04:56:29.301803 1637842 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 04:56:29.305170 1637842 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 04:56:29.308437 1637842 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 04:56:29.309049 1637842 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:56:29.342793 1637842 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:56:29.342909 1637842 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:56:29.397279 1637842 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 04:56:29.387560385 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:56:29.397384 1637842 docker.go:319] overlay module found
	I1209 04:56:29.400602 1637842 out.go:179] * Using the docker driver based on existing profile
	I1209 04:56:29.403610 1637842 start.go:309] selected driver: docker
	I1209 04:56:29.403639 1637842 start.go:927] validating driver "docker" against &{Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:56:29.403735 1637842 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 04:56:29.403853 1637842 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:56:29.458136 1637842 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 04:56:29.448851846 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:56:29.458670 1637842 cni.go:84] Creating CNI manager for ""
	I1209 04:56:29.458760 1637842 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 04:56:29.458801 1637842 start.go:353] cluster config:
	{Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:56:29.463801 1637842 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927580587Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927620637Z" level=info msg="Starting seccomp notifier watcher"
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927668178Z" level=info msg="Create NRI interface"
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927758033Z" level=info msg="built-in NRI default validator is disabled"
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927766493Z" level=info msg="runtime interface created"
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927780007Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927786308Z" level=info msg="runtime interface starting up..."
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927792741Z" level=info msg="starting plugins..."
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927805771Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927872323Z" level=info msg="No systemd watchdog enabled"
	Dec 09 04:42:04 functional-331811 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 09 04:46:09 functional-331811 crio[9992]: time="2025-12-09T04:46:09.942951614Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=d42015e0-8a7e-47f7-95a2-398ea8aa48f1 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:46:09 functional-331811 crio[9992]: time="2025-12-09T04:46:09.943749037Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=554d2336-7df0-4ab3-87a2-3f0040c79a84 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:46:09 functional-331811 crio[9992]: time="2025-12-09T04:46:09.944291229Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=70fb14c4-f971-4387-8e1b-10c98c4791aa name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:46:09 functional-331811 crio[9992]: time="2025-12-09T04:46:09.944730675Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=36db540a-ff25-4b5c-b7d7-cd7322fbd4bb name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:46:09 functional-331811 crio[9992]: time="2025-12-09T04:46:09.945138629Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=7427d70a-8db2-44c3-88f8-0607ec671ff6 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:46:09 functional-331811 crio[9992]: time="2025-12-09T04:46:09.945576229Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=b63b04fd-62c4-4cf0-9b5b-23eef2eb12c5 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:46:09 functional-331811 crio[9992]: time="2025-12-09T04:46:09.946074564Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=287329f7-949c-4b5b-8433-0437004398fd name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.930917732Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=60059689-b22e-4d2c-a555-518b088e6c52 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.93157629Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=cbef184f-5cab-42ab-88e7-b508de5c76c0 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.932075323Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=edcddd48-11b2-4a3e-b703-e9cffa332272 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.932520767Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=b8ee1139-0fe9-45a4-8cea-2e86a978a2fc name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.932923437Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=466ae3ad-f5a9-4d87-be0b-42f8886ae7b1 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.933429871Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=52758864-5ad7-4972-9017-2c4a591649f4 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.933861662Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=61e91b9e-e75b-4cf2-b677-070bdf524fb9 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:56:30.824768   23554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:56:30.825536   23554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:56:30.826545   23554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:56:30.827229   23554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:56:30.828785   23554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 9 02:15] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 03:35] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 04:15] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 04:17] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:23] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:24] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:41] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 04:56:30 up  9:38,  0 user,  load average: 0.90, 0.36, 0.45
	Linux functional-331811 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 09 04:56:28 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:56:29 functional-331811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1141.
	Dec 09 04:56:29 functional-331811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:56:29 functional-331811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:56:29 functional-331811 kubelet[23436]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:56:29 functional-331811 kubelet[23436]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:56:29 functional-331811 kubelet[23436]: E1209 04:56:29.140093   23436 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:56:29 functional-331811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:56:29 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:56:29 functional-331811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1142.
	Dec 09 04:56:29 functional-331811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:56:29 functional-331811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:56:29 functional-331811 kubelet[23451]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:56:29 functional-331811 kubelet[23451]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:56:29 functional-331811 kubelet[23451]: E1209 04:56:29.881858   23451 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:56:29 functional-331811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:56:29 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:56:30 functional-331811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1143.
	Dec 09 04:56:30 functional-331811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:56:30 functional-331811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:56:30 functional-331811 kubelet[23500]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:56:30 functional-331811 kubelet[23500]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:56:30 functional-331811 kubelet[23500]: E1209 04:56:30.638705   23500 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:56:30 functional-331811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:56:30 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-331811 -n functional-331811
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-331811 -n functional-331811: exit status 2 (317.080506ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-331811" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.76s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-331811 status: exit status 2 (315.24428ms)

-- stdout --
	functional-331811
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	

-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-arm64 -p functional-331811 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-331811 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (335.395503ms)

-- stdout --
	host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Configured

-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-arm64 -p functional-331811 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-331811 status -o json: exit status 2 (309.027334ms)

-- stdout --
	{"Name":"functional-331811","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-arm64 -p functional-331811 status -o json" : exit status 2
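
All three status failures above report the same underlying state: the host container is running while kubelet and apiserver are stopped. For scripting against this, here is a minimal Go sketch (not part of the test suite; the profile name and JSON field names are taken verbatim from the output above) that decodes the single-node "minikube status -o json" payload. Note that the command deliberately exits non-zero when a component is stopped, so stdout must be read even when the exec error is non-nil:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Status mirrors the object printed above:
	// {"Name":"functional-331811","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
	type Status struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	func main() {
		// Keep stdout even on a non-zero exit: the "exit status 2" runs above
		// still print a valid JSON body.
		out, err := exec.Command("minikube", "-p", "functional-331811", "status", "-o", "json").Output()
		if len(out) == 0 {
			panic(err)
		}
		var st Status
		if jerr := json.Unmarshal(out, &st); jerr != nil {
			panic(jerr)
		}
		fmt.Printf("host=%s kubelet=%s apiserver=%s (exit: %v)\n", st.Host, st.Kubelet, st.APIServer, err)
	}
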
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-331811
helpers_test.go:243: (dbg) docker inspect functional-331811:

-- stdout --
	[
	    {
	        "Id": "51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87",
	        "Created": "2025-12-09T04:27:19.770188806Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1609115,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T04:27:19.828715728Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/hostname",
	        "HostsPath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/hosts",
	        "LogPath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87-json.log",
	        "Name": "/functional-331811",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-331811:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-331811",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87",
	                "LowerDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685-init/diff:/var/lib/docker/overlay2/cb3f2b8eaaa8875b2899fccd39c4eec1759909855a0b804bc10246bdeabb16ed/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-331811",
	                "Source": "/var/lib/docker/volumes/functional-331811/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-331811",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-331811",
	                "name.minikube.sigs.k8s.io": "functional-331811",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5c0753338127320f08906f0ae98414e1971b55970cf028db179c2214fd2722cb",
	            "SandboxKey": "/var/run/docker/netns/5c0753338127",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34255"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34256"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34259"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34257"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34258"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-331811": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:27:66:bb:a1:d6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8c16962547dedb5d6155d1546bcc27e347ab5261f9ad46fc3b09cc8fb9cc112f",
	                    "EndpointID": "1a5d6a22e9497009b4121ea56dc4839e2ff8827d92252c0464236c5f49c11216",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-331811",
	                        "51da5dad63e9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
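
The inspect output above is what the later provisioning steps consume: NetworkSettings.Ports maps "22/tcp" to 127.0.0.1:34255, and the "Last Start" log below resolves that with a docker inspect Go template. As a standalone sketch (assuming the docker CLI is on PATH and the container name is as shown), the same lookup in Go:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same template the minikube log uses below:
		//   docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-331811
		// Against the inspect output above this prints 34255.
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, "functional-331811").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
	}
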
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-331811 -n functional-331811
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-331811 -n functional-331811: exit status 2 (328.433271ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service │ functional-331811 service list                                                                                                                      │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │                     │
	│ service │ functional-331811 service list -o json                                                                                                              │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │                     │
	│ service │ functional-331811 service --namespace=default --https --url hello-node                                                                              │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │                     │
	│ service │ functional-331811 service hello-node --url --format={{.IP}}                                                                                         │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │                     │
	│ service │ functional-331811 service hello-node --url                                                                                                          │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │                     │
	│ mount   │ -p functional-331811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1726689791/001:/mount-9p --alsologtostderr -v=1              │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │                     │
	│ ssh     │ functional-331811 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │                     │
	│ ssh     │ functional-331811 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ ssh     │ functional-331811 ssh -- ls -la /mount-9p                                                                                                           │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ ssh     │ functional-331811 ssh cat /mount-9p/test-1765256179538005159                                                                                        │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ ssh     │ functional-331811 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │                     │
	│ ssh     │ functional-331811 ssh sudo umount -f /mount-9p                                                                                                      │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ ssh     │ functional-331811 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │                     │
	│ mount   │ -p functional-331811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1191664544/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │                     │
	│ ssh     │ functional-331811 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ ssh     │ functional-331811 ssh -- ls -la /mount-9p                                                                                                           │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ ssh     │ functional-331811 ssh sudo umount -f /mount-9p                                                                                                      │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │                     │
	│ ssh     │ functional-331811 ssh findmnt -T /mount1                                                                                                            │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │                     │
	│ mount   │ -p functional-331811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3534500844/001:/mount3 --alsologtostderr -v=1                │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │                     │
	│ mount   │ -p functional-331811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3534500844/001:/mount1 --alsologtostderr -v=1                │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │                     │
	│ mount   │ -p functional-331811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3534500844/001:/mount2 --alsologtostderr -v=1                │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │                     │
	│ ssh     │ functional-331811 ssh findmnt -T /mount1                                                                                                            │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ ssh     │ functional-331811 ssh findmnt -T /mount2                                                                                                            │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ ssh     │ functional-331811 ssh findmnt -T /mount3                                                                                                            │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ mount   │ -p functional-331811 --kill=true                                                                                                                    │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 04:42:01
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 04:42:01.637786 1620518 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:42:01.637909 1620518 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:42:01.637913 1620518 out.go:374] Setting ErrFile to fd 2...
	I1209 04:42:01.637918 1620518 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:42:01.638166 1620518 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 04:42:01.638522 1620518 out.go:368] Setting JSON to false
	I1209 04:42:01.639450 1620518 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":33862,"bootTime":1765221460,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1209 04:42:01.639510 1620518 start.go:143] virtualization:  
	I1209 04:42:01.642955 1620518 out.go:179] * [functional-331811] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 04:42:01.646014 1620518 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 04:42:01.646101 1620518 notify.go:221] Checking for updates...
	I1209 04:42:01.651837 1620518 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:42:01.654857 1620518 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:42:01.657670 1620518 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1577059/.minikube
	I1209 04:42:01.660510 1620518 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 04:42:01.663383 1620518 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 04:42:01.666731 1620518 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 04:42:01.666828 1620518 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:42:01.689070 1620518 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:42:01.689175 1620518 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:42:01.744025 1620518 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-09 04:42:01.734708732 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:42:01.744121 1620518 docker.go:319] overlay module found
	I1209 04:42:01.749121 1620518 out.go:179] * Using the docker driver based on existing profile
	I1209 04:42:01.751932 1620518 start.go:309] selected driver: docker
	I1209 04:42:01.751941 1620518 start.go:927] validating driver "docker" against &{Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:42:01.752051 1620518 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 04:42:01.752158 1620518 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:42:01.824076 1620518 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-09 04:42:01.81179321 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:42:01.824456 1620518 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 04:42:01.824480 1620518 cni.go:84] Creating CNI manager for ""
	I1209 04:42:01.824537 1620518 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 04:42:01.824578 1620518 start.go:353] cluster config:
	{Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:42:01.827700 1620518 out.go:179] * Starting "functional-331811" primary control-plane node in "functional-331811" cluster
	I1209 04:42:01.830624 1620518 cache.go:134] Beginning downloading kic base image for docker with crio
	I1209 04:42:01.833519 1620518 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 04:42:01.836178 1620518 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1209 04:42:01.836217 1620518 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1209 04:42:01.836228 1620518 cache.go:65] Caching tarball of preloaded images
	I1209 04:42:01.836255 1620518 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 04:42:01.836324 1620518 preload.go:238] Found /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1209 04:42:01.836333 1620518 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1209 04:42:01.836451 1620518 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/config.json ...
	I1209 04:42:01.855430 1620518 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 04:42:01.855441 1620518 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 04:42:01.855455 1620518 cache.go:243] Successfully downloaded all kic artifacts
	I1209 04:42:01.855485 1620518 start.go:360] acquireMachinesLock for functional-331811: {Name:mkd467b4f3dd08f05040481144eb7b6b1e27d3ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 04:42:01.855543 1620518 start.go:364] duration metric: took 40.87µs to acquireMachinesLock for "functional-331811"
	I1209 04:42:01.855566 1620518 start.go:96] Skipping create...Using existing machine configuration
	I1209 04:42:01.855570 1620518 fix.go:54] fixHost starting: 
	I1209 04:42:01.855819 1620518 cli_runner.go:164] Run: docker container inspect functional-331811 --format={{.State.Status}}
	I1209 04:42:01.873325 1620518 fix.go:112] recreateIfNeeded on functional-331811: state=Running err=<nil>
	W1209 04:42:01.873351 1620518 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 04:42:01.876665 1620518 out.go:252] * Updating the running docker "functional-331811" container ...
	I1209 04:42:01.876693 1620518 machine.go:94] provisionDockerMachine start ...
	I1209 04:42:01.876797 1620518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:42:01.894796 1620518 main.go:143] libmachine: Using SSH client type: native
	I1209 04:42:01.895121 1620518 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34255 <nil> <nil>}
	I1209 04:42:01.895129 1620518 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 04:42:02.058680 1620518 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-331811
	
	I1209 04:42:02.058696 1620518 ubuntu.go:182] provisioning hostname "functional-331811"
	I1209 04:42:02.058761 1620518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:42:02.090920 1620518 main.go:143] libmachine: Using SSH client type: native
	I1209 04:42:02.091365 1620518 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34255 <nil> <nil>}
	I1209 04:42:02.091379 1620518 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-331811 && echo "functional-331811" | sudo tee /etc/hostname
	I1209 04:42:02.262883 1620518 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-331811
	
	I1209 04:42:02.262960 1620518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:42:02.281315 1620518 main.go:143] libmachine: Using SSH client type: native
	I1209 04:42:02.281623 1620518 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34255 <nil> <nil>}
	I1209 04:42:02.281637 1620518 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-331811' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-331811/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-331811' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 04:42:02.435135 1620518 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 04:42:02.435152 1620518 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1577059/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1577059/.minikube}
	I1209 04:42:02.435179 1620518 ubuntu.go:190] setting up certificates
	I1209 04:42:02.435197 1620518 provision.go:84] configureAuth start
	I1209 04:42:02.435267 1620518 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-331811
	I1209 04:42:02.452748 1620518 provision.go:143] copyHostCerts
	I1209 04:42:02.452806 1620518 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem, removing ...
	I1209 04:42:02.452813 1620518 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem
	I1209 04:42:02.452891 1620518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem (1078 bytes)
	I1209 04:42:02.452996 1620518 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem, removing ...
	I1209 04:42:02.453000 1620518 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem
	I1209 04:42:02.453027 1620518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem (1123 bytes)
	I1209 04:42:02.453088 1620518 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem, removing ...
	I1209 04:42:02.453092 1620518 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem
	I1209 04:42:02.453121 1620518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem (1675 bytes)
	I1209 04:42:02.453207 1620518 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem org=jenkins.functional-331811 san=[127.0.0.1 192.168.49.2 functional-331811 localhost minikube]
	I1209 04:42:02.729112 1620518 provision.go:177] copyRemoteCerts
	I1209 04:42:02.729174 1620518 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 04:42:02.729226 1620518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:42:02.747750 1620518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:42:02.856241 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 04:42:02.877475 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 04:42:02.898967 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 04:42:02.917189 1620518 provision.go:87] duration metric: took 481.970064ms to configureAuth
	I1209 04:42:02.917207 1620518 ubuntu.go:206] setting minikube options for container-runtime
	I1209 04:42:02.917407 1620518 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 04:42:02.917510 1620518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:42:02.935642 1620518 main.go:143] libmachine: Using SSH client type: native
	I1209 04:42:02.935957 1620518 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34255 <nil> <nil>}
	I1209 04:42:02.935968 1620518 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 04:42:03.293502 1620518 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 04:42:03.293517 1620518 machine.go:97] duration metric: took 1.416817164s to provisionDockerMachine
	I1209 04:42:03.293527 1620518 start.go:293] postStartSetup for "functional-331811" (driver="docker")
	I1209 04:42:03.293537 1620518 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 04:42:03.293597 1620518 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 04:42:03.293653 1620518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:42:03.312696 1620518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:42:03.419010 1620518 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 04:42:03.422897 1620518 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 04:42:03.422917 1620518 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 04:42:03.422927 1620518 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1577059/.minikube/addons for local assets ...
	I1209 04:42:03.422995 1620518 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1577059/.minikube/files for local assets ...
	I1209 04:42:03.423075 1620518 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem -> 15805212.pem in /etc/ssl/certs
	I1209 04:42:03.423167 1620518 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/test/nested/copy/1580521/hosts -> hosts in /etc/test/nested/copy/1580521
	I1209 04:42:03.423212 1620518 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1580521
	I1209 04:42:03.431449 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem --> /etc/ssl/certs/15805212.pem (1708 bytes)
	I1209 04:42:03.450423 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/test/nested/copy/1580521/hosts --> /etc/test/nested/copy/1580521/hosts (40 bytes)
	I1209 04:42:03.470159 1620518 start.go:296] duration metric: took 176.617533ms for postStartSetup
	I1209 04:42:03.470235 1620518 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 04:42:03.470292 1620518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:42:03.488346 1620518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:42:03.593519 1620518 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 04:42:03.598841 1620518 fix.go:56] duration metric: took 1.743264094s for fixHost
	I1209 04:42:03.598859 1620518 start.go:83] releasing machines lock for "functional-331811", held for 1.743308418s
	I1209 04:42:03.598929 1620518 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-331811
	I1209 04:42:03.617266 1620518 ssh_runner.go:195] Run: cat /version.json
	I1209 04:42:03.617315 1620518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:42:03.617558 1620518 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 04:42:03.617603 1620518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:42:03.646611 1620518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:42:03.653495 1620518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:42:03.852499 1620518 ssh_runner.go:195] Run: systemctl --version
	I1209 04:42:03.859513 1620518 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 04:42:03.897674 1620518 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 04:42:03.902590 1620518 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 04:42:03.902664 1620518 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 04:42:03.911194 1620518 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 04:42:03.911208 1620518 start.go:496] detecting cgroup driver to use...
	I1209 04:42:03.911240 1620518 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 04:42:03.911304 1620518 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 04:42:03.926479 1620518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 04:42:03.940314 1620518 docker.go:218] disabling cri-docker service (if available) ...
	I1209 04:42:03.940374 1620518 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 04:42:03.956989 1620518 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 04:42:03.970857 1620518 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 04:42:04.105722 1620518 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 04:42:04.221024 1620518 docker.go:234] disabling docker service ...
	I1209 04:42:04.221082 1620518 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 04:42:04.236606 1620518 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 04:42:04.259126 1620518 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 04:42:04.406348 1620518 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 04:42:04.537870 1620518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 04:42:04.550770 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 04:42:04.565609 1620518 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 04:42:04.565666 1620518 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:42:04.574449 1620518 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 04:42:04.574512 1620518 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:42:04.583819 1620518 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:42:04.592696 1620518 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:42:04.601828 1620518 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 04:42:04.610342 1620518 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:42:04.619401 1620518 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:42:04.628176 1620518 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:42:04.637069 1620518 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 04:42:04.644806 1620518 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 04:42:04.652309 1620518 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:42:04.767112 1620518 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 04:42:04.935446 1620518 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 04:42:04.935507 1620518 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 04:42:04.939304 1620518 start.go:564] Will wait 60s for crictl version
	I1209 04:42:04.939369 1620518 ssh_runner.go:195] Run: which crictl
	I1209 04:42:04.942772 1620518 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 04:42:04.967172 1620518 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1209 04:42:04.967246 1620518 ssh_runner.go:195] Run: crio --version
	I1209 04:42:05.000450 1620518 ssh_runner.go:195] Run: crio --version
	I1209 04:42:05.039508 1620518 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1209 04:42:05.042351 1620518 cli_runner.go:164] Run: docker network inspect functional-331811 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 04:42:05.058209 1620518 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1209 04:42:05.065398 1620518 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1209 04:42:05.068071 1620518 kubeadm.go:884] updating cluster {Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 04:42:05.068222 1620518 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1209 04:42:05.068288 1620518 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:42:05.125308 1620518 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 04:42:05.125320 1620518 crio.go:433] Images already preloaded, skipping extraction
	I1209 04:42:05.125384 1620518 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:42:05.156125 1620518 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 04:42:05.156137 1620518 cache_images.go:86] Images are preloaded, skipping loading
	I1209 04:42:05.156143 1620518 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1209 04:42:05.156245 1620518 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-331811 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
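The kubelet unit text above is written out as a systemd drop-in (the 374-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below). A sketch rendering a trimmed version of it with text/template; the template variables are illustrative names, and the empty ExecStart= line is the standard systemd idiom for clearing the base unit's command before redefining it:

    package main

    import (
    	"os"
    	"text/template"
    )

    // Trimmed template of the drop-in shown above; the flag set comes from
    // the log, the struct-like keys below are illustrative.
    const unit = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart={{.KubeletPath}} --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(unit))
    	// The blank ExecStart= resets the base unit so the drop-in's
    	// ExecStart fully replaces it rather than adding a second command.
    	t.Execute(os.Stdout, map[string]string{
    		"KubeletPath": "/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet",
    		"NodeName":    "functional-331811",
    		"NodeIP":      "192.168.49.2",
    	})
    }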
	I1209 04:42:05.156329 1620518 ssh_runner.go:195] Run: crio config
	I1209 04:42:05.230295 1620518 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
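Per the message above, a user-supplied extra-config value replaces the component's default outright rather than being appended to it. A sketch of that merge under assumed shapes (the real types live in minikube's extraconfig package):

    package main

    import "fmt"

    type extraOption struct{ Component, Key, Value string }

    // mergeExtra applies user-supplied options over component defaults,
    // replacing (not appending to) the default value, as the log states.
    func mergeExtra(defaults map[string]string, user []extraOption, component string) map[string]string {
    	out := map[string]string{}
    	for k, v := range defaults {
    		out[k] = v
    	}
    	for _, o := range user {
    		if o.Component == component {
    			out[o.Key] = o.Value
    		}
    	}
    	return out
    }

    func main() {
    	defaults := map[string]string{
    		"enable-admission-plugins": "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota",
    	}
    	user := []extraOption{{"apiserver", "enable-admission-plugins", "NamespaceAutoProvision"}}
    	fmt.Println(mergeExtra(defaults, user, "apiserver")) // default fully replaced
    }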
	I1209 04:42:05.230327 1620518 cni.go:84] Creating CNI manager for ""
	I1209 04:42:05.230335 1620518 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 04:42:05.230348 1620518 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 04:42:05.230371 1620518 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-331811 NodeName:functional-331811 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 04:42:05.230520 1620518 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-331811"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 04:42:05.230600 1620518 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1209 04:42:05.238799 1620518 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 04:42:05.238882 1620518 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 04:42:05.246819 1620518 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1209 04:42:05.260010 1620518 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1209 04:42:05.273192 1620518 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
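With kubeadm.yaml.new written, one way to sanity-check the rendered config before the kubelet restart is kubeadm's own validator. This is a sketch, not a step minikube runs here, and the `kubeadm config validate` subcommand is only available in recent kubeadm releases:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Hedged: assumes `kubeadm config validate` exists in this kubeadm
    	// build; it checks the file before any init phase consumes it.
    	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm",
    		"config", "validate", "--config", "/var/tmp/minikube/kubeadm.yaml.new")
    	out, err := cmd.CombinedOutput()
    	fmt.Printf("%s(err=%v)\n", out, err)
    }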
	I1209 04:42:05.287174 1620518 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1209 04:42:05.291010 1620518 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:42:05.412581 1620518 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 04:42:05.825078 1620518 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811 for IP: 192.168.49.2
	I1209 04:42:05.825089 1620518 certs.go:195] generating shared ca certs ...
	I1209 04:42:05.825104 1620518 certs.go:227] acquiring lock for ca certs: {Name:mkbe8bce08db7aa945866791683d426e1b560718 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:42:05.825273 1620518 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key
	I1209 04:42:05.825311 1620518 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key
	I1209 04:42:05.825317 1620518 certs.go:257] generating profile certs ...
	I1209 04:42:05.825400 1620518 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.key
	I1209 04:42:05.825453 1620518 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.key.29f4af34
	I1209 04:42:05.825489 1620518 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.key
	I1209 04:42:05.825606 1620518 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem (1338 bytes)
	W1209 04:42:05.825637 1620518 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521_empty.pem, impossibly tiny 0 bytes
	I1209 04:42:05.825643 1620518 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 04:42:05.825670 1620518 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem (1078 bytes)
	I1209 04:42:05.825692 1620518 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem (1123 bytes)
	I1209 04:42:05.825717 1620518 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem (1675 bytes)
	I1209 04:42:05.825764 1620518 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem (1708 bytes)
	I1209 04:42:05.826339 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 04:42:05.847398 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 04:42:05.867264 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 04:42:05.887896 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 04:42:05.907076 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 04:42:05.926224 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 04:42:05.944236 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 04:42:05.962834 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 04:42:05.981333 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem --> /usr/share/ca-certificates/15805212.pem (1708 bytes)
	I1209 04:42:06.001204 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 04:42:06.024226 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem --> /usr/share/ca-certificates/1580521.pem (1338 bytes)
	I1209 04:42:06.044638 1620518 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 04:42:06.059443 1620518 ssh_runner.go:195] Run: openssl version
	I1209 04:42:06.066215 1620518 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15805212.pem
	I1209 04:42:06.074237 1620518 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15805212.pem /etc/ssl/certs/15805212.pem
	I1209 04:42:06.083015 1620518 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15805212.pem
	I1209 04:42:06.087232 1620518 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:27 /usr/share/ca-certificates/15805212.pem
	I1209 04:42:06.087310 1620518 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15805212.pem
	I1209 04:42:06.129553 1620518 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 04:42:06.137400 1620518 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:42:06.144988 1620518 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 04:42:06.152871 1620518 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:42:06.156811 1620518 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:17 /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:42:06.156876 1620518 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:42:06.198268 1620518 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 04:42:06.205673 1620518 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1580521.pem
	I1209 04:42:06.212766 1620518 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1580521.pem /etc/ssl/certs/1580521.pem
	I1209 04:42:06.220239 1620518 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1580521.pem
	I1209 04:42:06.223985 1620518 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:27 /usr/share/ca-certificates/1580521.pem
	I1209 04:42:06.224039 1620518 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1580521.pem
	I1209 04:42:06.265241 1620518 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
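The test/ln/openssl sequence repeated three times above is the standard OpenSSL CA-store layout: each PEM under /etc/ssl/certs is located through a <subject-hash>.0 symlink (b5213941.0 for minikubeCA.pem above). A sketch of building such a link; minikube only verifies it with `test -L`, so creating it here is illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // installHashLink hashes the cert's subject the way OpenSSL does its
    // lookups and points /etc/ssl/certs/<hash>.0 at the PEM.
    func installHashLink(pem string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	return exec.Command("sudo", "ln", "-fs", pem, link).Run()
    }

    func main() {
    	fmt.Println(installHashLink("/usr/share/ca-certificates/minikubeCA.pem"))
    }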
	I1209 04:42:06.272666 1620518 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 04:42:06.276249 1620518 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 04:42:06.318459 1620518 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 04:42:06.361504 1620518 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 04:42:06.402819 1620518 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 04:42:06.443793 1620518 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 04:42:06.485065 1620518 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
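Each `-checkend 86400` call above asks OpenSSL whether the certificate will still be valid 86400 seconds (24 h) from now; exit status 0 means yes, 1 means it is about to expire and should be regenerated. A sketch wrapping that convention:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // validFor24h mirrors `openssl x509 -noout -checkend 86400`: a zero
    // exit status means the cert is still valid 86400 seconds from now.
    func validFor24h(cert string) bool {
    	err := exec.Command("openssl", "x509", "-noout", "-in", cert,
    		"-checkend", "86400").Run()
    	return err == nil
    }

    func main() {
    	fmt.Println(validFor24h("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
    }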
	I1209 04:42:06.526159 1620518 kubeadm.go:401] StartCluster: {Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:42:06.526240 1620518 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 04:42:06.526302 1620518 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:42:06.557743 1620518 cri.go:89] found id: ""
	I1209 04:42:06.557806 1620518 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 04:42:06.565919 1620518 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1209 04:42:06.565929 1620518 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1209 04:42:06.565979 1620518 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 04:42:06.574421 1620518 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:42:06.574975 1620518 kubeconfig.go:125] found "functional-331811" server: "https://192.168.49.2:8441"
	I1209 04:42:06.576238 1620518 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 04:42:06.585800 1620518 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-09 04:27:27.994828232 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-09 04:42:05.282481991 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
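The drift check is a plain `diff -u` of the old and new kubeadm.yaml: exit status 0 means the files are identical, 1 means they differ (reconfigure, as logged above), and anything else is a failure of diff itself. A sketch of interpreting those exit codes:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("sudo", "diff", "-u",
    		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	out, err := cmd.Output()
    	if err == nil {
    		fmt.Println("configs identical, no reconfigure needed")
    		return
    	}
    	// diff exits 1 when the files differ; anything else is a real error.
    	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
    		fmt.Printf("drift detected, will reconfigure:\n%s", out)
    		return
    	}
    	fmt.Println("diff failed:", err)
    }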
	I1209 04:42:06.585820 1620518 kubeadm.go:1161] stopping kube-system containers ...
	I1209 04:42:06.585830 1620518 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 04:42:06.585887 1620518 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:42:06.615364 1620518 cri.go:89] found id: ""
	I1209 04:42:06.615424 1620518 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 04:42:06.632416 1620518 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 04:42:06.640276 1620518 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec  9 04:31 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  9 04:31 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec  9 04:31 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec  9 04:31 /etc/kubernetes/scheduler.conf
	
	I1209 04:42:06.640334 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1209 04:42:06.648234 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1209 04:42:06.655526 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:42:06.655581 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 04:42:06.663036 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1209 04:42:06.670853 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:42:06.670911 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 04:42:06.678990 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1209 04:42:06.687863 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:42:06.687915 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 04:42:06.696417 1620518 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 04:42:06.705368 1620518 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 04:42:06.756797 1620518 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 04:42:08.115058 1620518 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.358236541s)
	I1209 04:42:08.115116 1620518 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 04:42:08.320381 1620518 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 04:42:08.380846 1620518 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
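Rather than a full `kubeadm init`, the restart path replays individual init phases, exactly the five shown above. A sketch of that sequence; the binary path and config path are taken from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	const kubeadm = "/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm"
    	// The phases replayed in the log, in order: certs, kubeconfigs,
    	// kubelet bootstrap, static control-plane manifests, local etcd.
    	phases := [][]string{
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append([]string{"init", "phase"}, p...)
    		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		if out, err := exec.Command("sudo", kubeadm, args...).CombinedOutput(); err != nil {
    			fmt.Printf("phase %v failed: %v\n%s", p, err, out)
    			return
    		}
    	}
    }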
	I1209 04:42:08.425206 1620518 api_server.go:52] waiting for apiserver process to appear ...
	I1209 04:42:08.425277 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:08.925770 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:09.425673 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:09.926006 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:10.426138 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:10.926333 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:11.426044 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:11.925865 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:12.426407 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:12.925704 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:13.425999 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:13.926113 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:14.426341 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:14.926036 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:15.425471 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:15.926251 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:16.426322 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:16.925477 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:17.426300 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:17.926252 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:18.426140 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:18.925451 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:19.426343 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:19.925709 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:20.426256 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:20.925497 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:21.425570 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:21.926150 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:22.425937 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:22.926432 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:23.425437 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:23.926221 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:24.425823 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:24.926268 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:25.426017 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:25.926031 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:26.425377 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:26.925360 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:27.425992 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:27.925571 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:28.425482 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:28.926361 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:29.426063 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:29.926242 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:30.425494 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:30.926061 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:31.425707 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:31.925370 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:32.426205 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:32.926119 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:33.426163 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:33.925480 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:34.425584 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:34.926360 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:35.426207 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:35.926064 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:36.426077 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:36.925371 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:37.426110 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:37.925474 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:38.425443 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:38.926209 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:39.426345 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:39.925457 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:40.426372 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:40.926174 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:41.426131 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:41.926382 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:42.426266 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:42.926376 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:43.425722 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:43.925468 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:44.425612 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:44.925853 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:45.425892 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:45.925441 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:46.425589 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:46.926038 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:47.425591 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:47.926409 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:48.426312 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:48.925878 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:49.425458 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:49.925689 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:50.426143 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:50.926139 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:51.426335 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:51.926396 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:52.425396 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:52.925485 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:53.425608 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:53.925545 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:54.425421 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:54.925703 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:55.426311 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:55.925392 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:56.426241 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:56.925364 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:57.425372 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:57.925465 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:58.425848 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:58.925784 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:59.425624 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:59.925465 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:00.425417 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:00.926188 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:01.426323 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:01.925858 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:02.426311 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:02.925474 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:03.425747 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:03.926082 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:04.425472 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:04.925448 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:05.425655 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:05.925700 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:06.425472 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:06.926215 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:07.425795 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:07.925648 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
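The half-second cadence of the lines above is a polling loop waiting for the kube-apiserver process to appear; pgrep exits 0 once a process matches the pattern. A sketch with an explicit deadline (the 60 s value is illustrative; minikube's actual wait is longer and interleaves the log gathering that follows):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer polls pgrep on the same pattern as the log until
    // the process shows up or the deadline passes.
    func waitForAPIServer(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
    		if err == nil {
    			return nil // pgrep exits 0 when a matching process exists
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
    	fmt.Println(waitForAPIServer(time.Minute))
    }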
	I1209 04:43:08.425431 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:08.425513 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:08.451611 1620518 cri.go:89] found id: ""
	I1209 04:43:08.451625 1620518 logs.go:282] 0 containers: []
	W1209 04:43:08.451634 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:08.451644 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:08.451703 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:08.478028 1620518 cri.go:89] found id: ""
	I1209 04:43:08.478042 1620518 logs.go:282] 0 containers: []
	W1209 04:43:08.478049 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:08.478054 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:08.478116 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:08.504952 1620518 cri.go:89] found id: ""
	I1209 04:43:08.504967 1620518 logs.go:282] 0 containers: []
	W1209 04:43:08.504974 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:08.504980 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:08.505037 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:08.531444 1620518 cri.go:89] found id: ""
	I1209 04:43:08.531460 1620518 logs.go:282] 0 containers: []
	W1209 04:43:08.531468 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:08.531473 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:08.531558 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:08.557796 1620518 cri.go:89] found id: ""
	I1209 04:43:08.557810 1620518 logs.go:282] 0 containers: []
	W1209 04:43:08.557817 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:08.557822 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:08.557878 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:08.589421 1620518 cri.go:89] found id: ""
	I1209 04:43:08.589436 1620518 logs.go:282] 0 containers: []
	W1209 04:43:08.589443 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:08.589448 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:08.589505 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:08.626762 1620518 cri.go:89] found id: ""
	I1209 04:43:08.626776 1620518 logs.go:282] 0 containers: []
	W1209 04:43:08.626783 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:08.626792 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:08.626802 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:08.694456 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:08.694477 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:08.709310 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:08.709333 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:08.773551 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:08.764935   11065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:08.765641   11065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:08.766378   11065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:08.767874   11065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:08.768158   11065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:43:08.764935   11065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:08.765641   11065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:08.766378   11065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:08.767874   11065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:08.768158   11065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:08.773573 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:08.773584 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:08.840868 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:08.840888 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
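When no control-plane containers are found, the fallback above gathers diagnostics from a fixed set of sources. A sketch running the same shell pipelines as logged; the map's arbitrary iteration order matches the varying order across rounds in the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// The same diagnostic sources the log cycles through; each one is
    	// just a shell pipeline executed over SSH in the real code.
    	sources := map[string]string{
    		"kubelet":          "sudo journalctl -u kubelet -n 400",
    		"CRI-O":            "sudo journalctl -u crio -n 400",
    		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    	}
    	for name, cmd := range sources {
    		out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		fmt.Printf("==> %s <==\n%s\n", name, out)
    	}
    }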
	I1209 04:43:11.374296 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:11.384818 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:11.384880 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:11.413700 1620518 cri.go:89] found id: ""
	I1209 04:43:11.413713 1620518 logs.go:282] 0 containers: []
	W1209 04:43:11.413720 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:11.413725 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:11.413783 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:11.439148 1620518 cri.go:89] found id: ""
	I1209 04:43:11.439163 1620518 logs.go:282] 0 containers: []
	W1209 04:43:11.439170 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:11.439175 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:11.439236 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:11.468833 1620518 cri.go:89] found id: ""
	I1209 04:43:11.468847 1620518 logs.go:282] 0 containers: []
	W1209 04:43:11.468854 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:11.468859 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:11.468917 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:11.501328 1620518 cri.go:89] found id: ""
	I1209 04:43:11.501343 1620518 logs.go:282] 0 containers: []
	W1209 04:43:11.501350 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:11.501355 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:11.501420 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:11.527673 1620518 cri.go:89] found id: ""
	I1209 04:43:11.527687 1620518 logs.go:282] 0 containers: []
	W1209 04:43:11.527695 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:11.527700 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:11.527757 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:11.552531 1620518 cri.go:89] found id: ""
	I1209 04:43:11.552545 1620518 logs.go:282] 0 containers: []
	W1209 04:43:11.552552 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:11.552557 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:11.552618 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:11.591493 1620518 cri.go:89] found id: ""
	I1209 04:43:11.591507 1620518 logs.go:282] 0 containers: []
	W1209 04:43:11.591514 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:11.591522 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:11.591538 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:11.626001 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:11.626017 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:11.699914 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:11.699939 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:11.715894 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:11.715917 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:11.780735 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:11.772451   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:11.773056   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:11.774787   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:11.775166   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:11.776611   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:43:11.772451   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:11.773056   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:11.774787   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:11.775166   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:11.776611   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:11.780754 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:11.780765 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:14.352369 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:14.362558 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:14.362633 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:14.388407 1620518 cri.go:89] found id: ""
	I1209 04:43:14.388421 1620518 logs.go:282] 0 containers: []
	W1209 04:43:14.388428 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:14.388433 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:14.388490 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:14.415937 1620518 cri.go:89] found id: ""
	I1209 04:43:14.415952 1620518 logs.go:282] 0 containers: []
	W1209 04:43:14.415960 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:14.415965 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:14.416029 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:14.445418 1620518 cri.go:89] found id: ""
	I1209 04:43:14.445433 1620518 logs.go:282] 0 containers: []
	W1209 04:43:14.445440 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:14.445445 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:14.445513 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:14.471362 1620518 cri.go:89] found id: ""
	I1209 04:43:14.471376 1620518 logs.go:282] 0 containers: []
	W1209 04:43:14.471383 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:14.471388 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:14.471452 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:14.503134 1620518 cri.go:89] found id: ""
	I1209 04:43:14.503148 1620518 logs.go:282] 0 containers: []
	W1209 04:43:14.503155 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:14.503160 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:14.503219 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:14.529790 1620518 cri.go:89] found id: ""
	I1209 04:43:14.529803 1620518 logs.go:282] 0 containers: []
	W1209 04:43:14.529811 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:14.529816 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:14.529889 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:14.555803 1620518 cri.go:89] found id: ""
	I1209 04:43:14.555817 1620518 logs.go:282] 0 containers: []
	W1209 04:43:14.555824 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:14.555832 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:14.555843 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:14.632593 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:14.632611 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:14.648671 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:14.648687 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:14.713371 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:14.705883   11280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:14.706301   11280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:14.707740   11280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:14.708041   11280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:14.709450   11280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:43:14.705883   11280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:14.706301   11280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:14.707740   11280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:14.708041   11280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:14.709450   11280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:14.713382 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:14.713400 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:14.783824 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:14.783843 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:17.318936 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:17.329339 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:17.329407 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:17.356311 1620518 cri.go:89] found id: ""
	I1209 04:43:17.356330 1620518 logs.go:282] 0 containers: []
	W1209 04:43:17.356351 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:17.356356 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:17.356416 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:17.386438 1620518 cri.go:89] found id: ""
	I1209 04:43:17.386452 1620518 logs.go:282] 0 containers: []
	W1209 04:43:17.386460 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:17.386465 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:17.386528 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:17.411209 1620518 cri.go:89] found id: ""
	I1209 04:43:17.411222 1620518 logs.go:282] 0 containers: []
	W1209 04:43:17.411229 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:17.411234 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:17.411291 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:17.437189 1620518 cri.go:89] found id: ""
	I1209 04:43:17.437201 1620518 logs.go:282] 0 containers: []
	W1209 04:43:17.437208 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:17.437229 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:17.437286 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:17.463836 1620518 cri.go:89] found id: ""
	I1209 04:43:17.463850 1620518 logs.go:282] 0 containers: []
	W1209 04:43:17.463857 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:17.463862 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:17.463945 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:17.490604 1620518 cri.go:89] found id: ""
	I1209 04:43:17.490617 1620518 logs.go:282] 0 containers: []
	W1209 04:43:17.490625 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:17.490630 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:17.490691 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:17.517583 1620518 cri.go:89] found id: ""
	I1209 04:43:17.517597 1620518 logs.go:282] 0 containers: []
	W1209 04:43:17.517605 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:17.517612 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:17.517623 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:17.532622 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:17.532638 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:17.611464 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:17.600424   11377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:17.601337   11377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:17.605117   11377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:17.605586   11377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:17.607164   11377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:43:17.611477 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:17.611487 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:17.693672 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:17.693692 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:17.723232 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:17.723249 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
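
	For reference, each probe in the cycle above can be re-run by hand on the node; a minimal sketch using only commands that already appear in this log (quoting the pgrep pattern is an addition for shell safety):

	# is any apiserver process running for this profile?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# does CRI-O know of any kube-apiserver container, running or exited?
	sudo crictl ps -a --quiet --name=kube-apiserver
	# the kubelet journal is usually where the reason the static pod never started shows up
	sudo journalctl -u kubelet -n 400
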
	I1209 04:43:20.294145 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:20.304681 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:20.304742 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:20.333282 1620518 cri.go:89] found id: ""
	I1209 04:43:20.333297 1620518 logs.go:282] 0 containers: []
	W1209 04:43:20.333304 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:20.333309 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:20.333367 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:20.363210 1620518 cri.go:89] found id: ""
	I1209 04:43:20.363224 1620518 logs.go:282] 0 containers: []
	W1209 04:43:20.363231 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:20.363236 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:20.363300 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:20.387964 1620518 cri.go:89] found id: ""
	I1209 04:43:20.387978 1620518 logs.go:282] 0 containers: []
	W1209 04:43:20.387985 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:20.387995 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:20.388054 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:20.414851 1620518 cri.go:89] found id: ""
	I1209 04:43:20.414864 1620518 logs.go:282] 0 containers: []
	W1209 04:43:20.414871 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:20.414876 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:20.414943 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:20.441500 1620518 cri.go:89] found id: ""
	I1209 04:43:20.441514 1620518 logs.go:282] 0 containers: []
	W1209 04:43:20.441521 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:20.441526 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:20.441584 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:20.468302 1620518 cri.go:89] found id: ""
	I1209 04:43:20.468318 1620518 logs.go:282] 0 containers: []
	W1209 04:43:20.468325 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:20.468331 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:20.468393 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:20.497314 1620518 cri.go:89] found id: ""
	I1209 04:43:20.497328 1620518 logs.go:282] 0 containers: []
	W1209 04:43:20.497345 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:20.497354 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:20.497364 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:20.570464 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:20.570492 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:20.586642 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:20.586660 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:20.665367 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:20.657066   11489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:20.657608   11489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:20.659336   11489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:20.659839   11489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:20.661420   11489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:43:20.665378 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:20.665389 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:20.733648 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:20.733669 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:23.265697 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:23.275834 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:23.275893 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:23.304587 1620518 cri.go:89] found id: ""
	I1209 04:43:23.304613 1620518 logs.go:282] 0 containers: []
	W1209 04:43:23.304620 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:23.304626 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:23.304692 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:23.329381 1620518 cri.go:89] found id: ""
	I1209 04:43:23.329406 1620518 logs.go:282] 0 containers: []
	W1209 04:43:23.329414 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:23.329419 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:23.329485 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:23.355201 1620518 cri.go:89] found id: ""
	I1209 04:43:23.355215 1620518 logs.go:282] 0 containers: []
	W1209 04:43:23.355222 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:23.355227 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:23.355289 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:23.380238 1620518 cri.go:89] found id: ""
	I1209 04:43:23.380251 1620518 logs.go:282] 0 containers: []
	W1209 04:43:23.380258 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:23.380263 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:23.380322 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:23.409750 1620518 cri.go:89] found id: ""
	I1209 04:43:23.409764 1620518 logs.go:282] 0 containers: []
	W1209 04:43:23.409771 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:23.409776 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:23.409838 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:23.437575 1620518 cri.go:89] found id: ""
	I1209 04:43:23.437588 1620518 logs.go:282] 0 containers: []
	W1209 04:43:23.437595 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:23.437600 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:23.437657 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:23.464403 1620518 cri.go:89] found id: ""
	I1209 04:43:23.464418 1620518 logs.go:282] 0 containers: []
	W1209 04:43:23.464425 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:23.464432 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:23.464444 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:23.479567 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:23.479583 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:23.543433 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:23.534948   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:23.535540   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:23.537123   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:23.537643   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:23.539288   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:43:23.543443 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:23.543454 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:23.620689 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:23.620709 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:23.660232 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:23.660249 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:26.230943 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:26.242046 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:26.242107 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:26.269716 1620518 cri.go:89] found id: ""
	I1209 04:43:26.269729 1620518 logs.go:282] 0 containers: []
	W1209 04:43:26.269736 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:26.269741 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:26.269798 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:26.296756 1620518 cri.go:89] found id: ""
	I1209 04:43:26.296771 1620518 logs.go:282] 0 containers: []
	W1209 04:43:26.296778 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:26.296783 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:26.296844 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:26.325789 1620518 cri.go:89] found id: ""
	I1209 04:43:26.325803 1620518 logs.go:282] 0 containers: []
	W1209 04:43:26.325810 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:26.325816 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:26.325878 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:26.362024 1620518 cri.go:89] found id: ""
	I1209 04:43:26.362037 1620518 logs.go:282] 0 containers: []
	W1209 04:43:26.362044 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:26.362049 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:26.362105 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:26.389037 1620518 cri.go:89] found id: ""
	I1209 04:43:26.389051 1620518 logs.go:282] 0 containers: []
	W1209 04:43:26.389058 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:26.389063 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:26.389123 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:26.416773 1620518 cri.go:89] found id: ""
	I1209 04:43:26.416787 1620518 logs.go:282] 0 containers: []
	W1209 04:43:26.416794 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:26.416799 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:26.416854 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:26.442294 1620518 cri.go:89] found id: ""
	I1209 04:43:26.442308 1620518 logs.go:282] 0 containers: []
	W1209 04:43:26.442315 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:26.442323 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:26.442334 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:26.508604 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:26.508623 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:26.523993 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:26.524013 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:26.599795 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:26.590777   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:26.591488   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:26.593176   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:26.593729   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:26.595401   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:43:26.599816 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:26.599829 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:26.676981 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:26.677003 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:29.206372 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:29.216486 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:29.216547 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:29.241737 1620518 cri.go:89] found id: ""
	I1209 04:43:29.241752 1620518 logs.go:282] 0 containers: []
	W1209 04:43:29.241759 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:29.241764 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:29.241819 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:29.275909 1620518 cri.go:89] found id: ""
	I1209 04:43:29.275922 1620518 logs.go:282] 0 containers: []
	W1209 04:43:29.275929 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:29.275935 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:29.275993 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:29.300470 1620518 cri.go:89] found id: ""
	I1209 04:43:29.300483 1620518 logs.go:282] 0 containers: []
	W1209 04:43:29.300490 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:29.300495 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:29.300552 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:29.326081 1620518 cri.go:89] found id: ""
	I1209 04:43:29.326094 1620518 logs.go:282] 0 containers: []
	W1209 04:43:29.326101 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:29.326106 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:29.326166 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:29.353323 1620518 cri.go:89] found id: ""
	I1209 04:43:29.353337 1620518 logs.go:282] 0 containers: []
	W1209 04:43:29.353344 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:29.353349 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:29.353414 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:29.378490 1620518 cri.go:89] found id: ""
	I1209 04:43:29.378505 1620518 logs.go:282] 0 containers: []
	W1209 04:43:29.378512 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:29.378517 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:29.378599 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:29.404558 1620518 cri.go:89] found id: ""
	I1209 04:43:29.404571 1620518 logs.go:282] 0 containers: []
	W1209 04:43:29.404578 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:29.404585 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:29.404595 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:29.470257 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:29.470277 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:29.485347 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:29.485368 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:29.550659 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:29.541924   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:29.542686   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:29.544323   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:29.545085   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:29.546770   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:43:29.550676 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:29.550687 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:29.628618 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:29.628639 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:32.159988 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:32.170169 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:32.170227 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:32.195475 1620518 cri.go:89] found id: ""
	I1209 04:43:32.195489 1620518 logs.go:282] 0 containers: []
	W1209 04:43:32.195496 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:32.195502 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:32.195558 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:32.221067 1620518 cri.go:89] found id: ""
	I1209 04:43:32.221080 1620518 logs.go:282] 0 containers: []
	W1209 04:43:32.221088 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:32.221093 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:32.221160 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:32.247302 1620518 cri.go:89] found id: ""
	I1209 04:43:32.247315 1620518 logs.go:282] 0 containers: []
	W1209 04:43:32.247322 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:32.247327 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:32.247388 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:32.273214 1620518 cri.go:89] found id: ""
	I1209 04:43:32.273227 1620518 logs.go:282] 0 containers: []
	W1209 04:43:32.273234 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:32.273239 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:32.273296 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:32.301827 1620518 cri.go:89] found id: ""
	I1209 04:43:32.301842 1620518 logs.go:282] 0 containers: []
	W1209 04:43:32.301849 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:32.301855 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:32.301920 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:32.327504 1620518 cri.go:89] found id: ""
	I1209 04:43:32.327518 1620518 logs.go:282] 0 containers: []
	W1209 04:43:32.327526 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:32.327531 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:32.327592 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:32.354211 1620518 cri.go:89] found id: ""
	I1209 04:43:32.354225 1620518 logs.go:282] 0 containers: []
	W1209 04:43:32.354232 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:32.354240 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:32.354251 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:32.424906 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:32.424926 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:32.440380 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:32.440396 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:32.508486 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:32.500209   11908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:32.500881   11908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:32.502632   11908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:32.503285   11908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:32.504430   11908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:43:32.508496 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:32.508506 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:32.577521 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:32.577541 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:35.111262 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:35.121574 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:35.121636 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:35.147108 1620518 cri.go:89] found id: ""
	I1209 04:43:35.147121 1620518 logs.go:282] 0 containers: []
	W1209 04:43:35.147128 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:35.147134 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:35.147193 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:35.172557 1620518 cri.go:89] found id: ""
	I1209 04:43:35.172571 1620518 logs.go:282] 0 containers: []
	W1209 04:43:35.172578 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:35.172583 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:35.172644 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:35.200994 1620518 cri.go:89] found id: ""
	I1209 04:43:35.201007 1620518 logs.go:282] 0 containers: []
	W1209 04:43:35.201020 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:35.201025 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:35.201082 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:35.230443 1620518 cri.go:89] found id: ""
	I1209 04:43:35.230457 1620518 logs.go:282] 0 containers: []
	W1209 04:43:35.230470 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:35.230476 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:35.230536 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:35.255703 1620518 cri.go:89] found id: ""
	I1209 04:43:35.255716 1620518 logs.go:282] 0 containers: []
	W1209 04:43:35.255723 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:35.255728 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:35.255786 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:35.281749 1620518 cri.go:89] found id: ""
	I1209 04:43:35.281762 1620518 logs.go:282] 0 containers: []
	W1209 04:43:35.281780 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:35.281786 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:35.281852 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:35.306677 1620518 cri.go:89] found id: ""
	I1209 04:43:35.306690 1620518 logs.go:282] 0 containers: []
	W1209 04:43:35.306697 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:35.306705 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:35.306715 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:35.375938 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:35.375957 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:35.390955 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:35.390984 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:35.457222 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:35.448756   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:35.449545   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:35.451244   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:35.451795   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:35.453385   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:43:35.457240 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:35.457252 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:35.526131 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:35.526150 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:38.057096 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:38.068039 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:38.068101 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:38.097645 1620518 cri.go:89] found id: ""
	I1209 04:43:38.097659 1620518 logs.go:282] 0 containers: []
	W1209 04:43:38.097666 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:38.097672 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:38.097730 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:38.125024 1620518 cri.go:89] found id: ""
	I1209 04:43:38.125038 1620518 logs.go:282] 0 containers: []
	W1209 04:43:38.125045 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:38.125051 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:38.125106 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:38.158551 1620518 cri.go:89] found id: ""
	I1209 04:43:38.158565 1620518 logs.go:282] 0 containers: []
	W1209 04:43:38.158597 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:38.158602 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:38.158667 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:38.185732 1620518 cri.go:89] found id: ""
	I1209 04:43:38.185746 1620518 logs.go:282] 0 containers: []
	W1209 04:43:38.185753 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:38.185758 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:38.185817 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:38.211917 1620518 cri.go:89] found id: ""
	I1209 04:43:38.211931 1620518 logs.go:282] 0 containers: []
	W1209 04:43:38.211938 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:38.211944 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:38.212003 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:38.242391 1620518 cri.go:89] found id: ""
	I1209 04:43:38.242407 1620518 logs.go:282] 0 containers: []
	W1209 04:43:38.242414 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:38.242420 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:38.242495 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:38.268565 1620518 cri.go:89] found id: ""
	I1209 04:43:38.268598 1620518 logs.go:282] 0 containers: []
	W1209 04:43:38.268606 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:38.268616 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:38.268628 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:38.335336 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:38.335355 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:38.350651 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:38.350667 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:38.413931 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:38.405709   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:38.406404   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:38.408105   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:38.408552   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:38.410061   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:43:38.413941 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:38.413952 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:38.481874 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:38.481894 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:41.013724 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:41.024462 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:41.024521 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:41.050950 1620518 cri.go:89] found id: ""
	I1209 04:43:41.050965 1620518 logs.go:282] 0 containers: []
	W1209 04:43:41.050973 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:41.050979 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:41.051050 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:41.080781 1620518 cri.go:89] found id: ""
	I1209 04:43:41.080794 1620518 logs.go:282] 0 containers: []
	W1209 04:43:41.080801 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:41.080806 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:41.080864 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:41.107039 1620518 cri.go:89] found id: ""
	I1209 04:43:41.107053 1620518 logs.go:282] 0 containers: []
	W1209 04:43:41.107059 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:41.107064 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:41.107122 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:41.131302 1620518 cri.go:89] found id: ""
	I1209 04:43:41.131316 1620518 logs.go:282] 0 containers: []
	W1209 04:43:41.131323 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:41.131328 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:41.131387 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:41.160541 1620518 cri.go:89] found id: ""
	I1209 04:43:41.160554 1620518 logs.go:282] 0 containers: []
	W1209 04:43:41.160560 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:41.160566 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:41.160623 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:41.189715 1620518 cri.go:89] found id: ""
	I1209 04:43:41.189728 1620518 logs.go:282] 0 containers: []
	W1209 04:43:41.189735 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:41.189741 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:41.189798 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:41.215532 1620518 cri.go:89] found id: ""
	I1209 04:43:41.215545 1620518 logs.go:282] 0 containers: []
	W1209 04:43:41.215552 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:41.215559 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:41.215570 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:41.248230 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:41.248245 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:41.316564 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:41.316589 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:41.332031 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:41.332048 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:41.399707 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:41.390550   12237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:41.391761   12237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:41.393298   12237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:41.393745   12237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:41.395316   12237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:43:41.390550   12237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:41.391761   12237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:41.393298   12237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:41.393745   12237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:41.395316   12237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:41.399720 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:41.399733 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
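The cycle above is the harness probing each control-plane component with crictl and coming up empty. A minimal Go sketch of that check, built only from the commands visible in this log (an illustrative reconstruction, not minikube's actual cri.go):

```go
// Sketch: reproduce the per-component container listing seen in the log.
// The crictl invocation mirrors "sudo crictl ps -a --quiet --name=<component>";
// everything else (function names, structure) is illustrative.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the container IDs crictl prints with --quiet
// (one ID per line); an empty slice corresponds to the log's
// `No container was found matching "<component>"` warnings.
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet"}
	for _, c := range components {
		ids, err := listContainerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", c, len(ids), ids)
	}
}
```

Run on this node, the sketch would print "no container found" for all seven names, which is exactly the state the log records before it falls back to gathering kubelet, dmesg, describe-nodes, and CRI-O output.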
	I1209 04:43:43.973310 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:43.983577 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:43.983641 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:44.019270 1620518 cri.go:89] found id: ""
	I1209 04:43:44.019285 1620518 logs.go:282] 0 containers: []
	W1209 04:43:44.019292 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:44.019298 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:44.019362 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:44.046326 1620518 cri.go:89] found id: ""
	I1209 04:43:44.046340 1620518 logs.go:282] 0 containers: []
	W1209 04:43:44.046347 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:44.046353 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:44.046416 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:44.073718 1620518 cri.go:89] found id: ""
	I1209 04:43:44.073732 1620518 logs.go:282] 0 containers: []
	W1209 04:43:44.073739 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:44.073745 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:44.073806 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:44.099804 1620518 cri.go:89] found id: ""
	I1209 04:43:44.099818 1620518 logs.go:282] 0 containers: []
	W1209 04:43:44.099825 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:44.099830 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:44.099888 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:44.125332 1620518 cri.go:89] found id: ""
	I1209 04:43:44.125346 1620518 logs.go:282] 0 containers: []
	W1209 04:43:44.125353 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:44.125358 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:44.125418 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:44.153398 1620518 cri.go:89] found id: ""
	I1209 04:43:44.153413 1620518 logs.go:282] 0 containers: []
	W1209 04:43:44.153420 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:44.153438 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:44.153501 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:44.181868 1620518 cri.go:89] found id: ""
	I1209 04:43:44.181882 1620518 logs.go:282] 0 containers: []
	W1209 04:43:44.181889 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:44.181909 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:44.181919 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:44.197827 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:44.197843 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:44.262818 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:44.254312   12332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:44.255050   12332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:44.256717   12332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:44.257244   12332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:44.258990   12332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:43:44.254312   12332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:44.255050   12332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:44.256717   12332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:44.257244   12332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:44.258990   12332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:44.262829 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:44.262840 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:44.331403 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:44.331423 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:44.363934 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:44.363951 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
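Every "describe nodes" attempt in this log dies the same way: kubectl cannot reach the apiserver because nothing is listening on localhost:8441. A hedged way to confirm that without kubectl is a plain TCP dial to the port taken from the log (the probe below is an assumption for illustration, not part of the test harness):

```go
// Sketch: check whether anything listens on the apiserver port that every
// kubectl call above fails to reach
// ("dial tcp [::1]:8441: connect: connection refused").
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// Same failure mode the log reports through kubectl.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}
```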
	[04:43:46 - 04:44:02: six further identical iterations of this diagnostic cycle, roughly three seconds apart (kubectl PIDs 12440, 12535, 12657, 12746, 12855, 12964). In every iteration each crictl listing returns zero containers and "kubectl describe nodes" fails with the same connection-refused error on localhost:8441.]
	I1209 04:44:04.751592 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:04.762232 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:04.762298 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:04.788096 1620518 cri.go:89] found id: ""
	I1209 04:44:04.788110 1620518 logs.go:282] 0 containers: []
	W1209 04:44:04.788117 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:04.788122 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:04.788184 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:04.829955 1620518 cri.go:89] found id: ""
	I1209 04:44:04.829969 1620518 logs.go:282] 0 containers: []
	W1209 04:44:04.829975 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:04.829981 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:04.830037 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:04.869304 1620518 cri.go:89] found id: ""
	I1209 04:44:04.869318 1620518 logs.go:282] 0 containers: []
	W1209 04:44:04.869325 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:04.869330 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:04.869389 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:04.900033 1620518 cri.go:89] found id: ""
	I1209 04:44:04.900048 1620518 logs.go:282] 0 containers: []
	W1209 04:44:04.900054 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:04.900060 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:04.900118 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:04.926358 1620518 cri.go:89] found id: ""
	I1209 04:44:04.926373 1620518 logs.go:282] 0 containers: []
	W1209 04:44:04.926381 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:04.926386 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:04.926446 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:04.952219 1620518 cri.go:89] found id: ""
	I1209 04:44:04.952233 1620518 logs.go:282] 0 containers: []
	W1209 04:44:04.952240 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:04.952245 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:04.952318 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:04.981606 1620518 cri.go:89] found id: ""
	I1209 04:44:04.981633 1620518 logs.go:282] 0 containers: []
	W1209 04:44:04.981640 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:04.981648 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:04.981659 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:05.054363 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:05.045151   13065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:05.046053   13065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:05.047917   13065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:05.048288   13065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:05.049848   13065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:44:05.045151   13065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:05.046053   13065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:05.047917   13065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:05.048288   13065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:05.049848   13065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:44:05.054374 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:05.054384 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:05.123486 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:05.123508 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:05.153591 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:05.153609 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:05.220156 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:05.220176 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:07.735728 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:07.746784 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:07.746849 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:07.773633 1620518 cri.go:89] found id: ""
	I1209 04:44:07.773646 1620518 logs.go:282] 0 containers: []
	W1209 04:44:07.773653 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:07.773658 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:07.773714 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:07.799209 1620518 cri.go:89] found id: ""
	I1209 04:44:07.799222 1620518 logs.go:282] 0 containers: []
	W1209 04:44:07.799230 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:07.799235 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:07.799289 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:07.833034 1620518 cri.go:89] found id: ""
	I1209 04:44:07.833047 1620518 logs.go:282] 0 containers: []
	W1209 04:44:07.833055 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:07.833060 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:07.833117 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:07.861960 1620518 cri.go:89] found id: ""
	I1209 04:44:07.861979 1620518 logs.go:282] 0 containers: []
	W1209 04:44:07.861986 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:07.861991 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:07.862048 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:07.891370 1620518 cri.go:89] found id: ""
	I1209 04:44:07.891384 1620518 logs.go:282] 0 containers: []
	W1209 04:44:07.891392 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:07.891398 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:07.891499 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:07.925093 1620518 cri.go:89] found id: ""
	I1209 04:44:07.925106 1620518 logs.go:282] 0 containers: []
	W1209 04:44:07.925113 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:07.925119 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:07.925179 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:07.953814 1620518 cri.go:89] found id: ""
	I1209 04:44:07.953828 1620518 logs.go:282] 0 containers: []
	W1209 04:44:07.953845 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:07.953853 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:07.953863 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:08.019480 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:08.019500 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:08.035405 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:08.035420 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:08.103942 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:08.095426   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:08.096263   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:08.097939   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:08.098274   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:08.099807   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:44:08.095426   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:08.096263   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:08.097939   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:08.098274   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:08.099807   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:44:08.103951 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:08.103964 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:08.173425 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:08.173447 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	[... the same wait-loop iteration repeats at ~3-second intervals: 04:44:10, 04:44:13, 04:44:16, 04:44:19, 04:44:22, 04:44:25, and 04:44:28 ...]
	[... each iteration: pgrep finds no kube-apiserver process; crictl lists no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet containers; logs are gathered (in varying order) for kubelet, dmesg, describe nodes, CRI-O, and container status ...]
	[... each "describe nodes" run exits with status 1 and the same stderr: "The connection to the server localhost:8441 was refused - did you specify the right host or port?" (dial tcp [::1]:8441: connect: connection refused) ...]
	I1209 04:44:31.501091 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:31.511589 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:31.511662 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:31.537954 1620518 cri.go:89] found id: ""
	I1209 04:44:31.537967 1620518 logs.go:282] 0 containers: []
	W1209 04:44:31.537974 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:31.537979 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:31.538035 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:31.563399 1620518 cri.go:89] found id: ""
	I1209 04:44:31.563412 1620518 logs.go:282] 0 containers: []
	W1209 04:44:31.563419 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:31.563424 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:31.563481 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:31.590727 1620518 cri.go:89] found id: ""
	I1209 04:44:31.590741 1620518 logs.go:282] 0 containers: []
	W1209 04:44:31.590748 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:31.590753 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:31.590817 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:31.619991 1620518 cri.go:89] found id: ""
	I1209 04:44:31.620004 1620518 logs.go:282] 0 containers: []
	W1209 04:44:31.620012 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:31.620017 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:31.620073 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:31.646682 1620518 cri.go:89] found id: ""
	I1209 04:44:31.646695 1620518 logs.go:282] 0 containers: []
	W1209 04:44:31.646703 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:31.646709 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:31.646783 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:31.676240 1620518 cri.go:89] found id: ""
	I1209 04:44:31.676254 1620518 logs.go:282] 0 containers: []
	W1209 04:44:31.676261 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:31.676266 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:31.676324 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:31.701874 1620518 cri.go:89] found id: ""
	I1209 04:44:31.701898 1620518 logs.go:282] 0 containers: []
	W1209 04:44:31.701906 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:31.701914 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:31.701924 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:31.729913 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:31.729929 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:31.795202 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:31.795222 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:31.810455 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:31.810471 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:31.910056 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:31.901648   14015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:31.902306   14015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:31.903933   14015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:31.904418   14015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:31.906134   14015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:44:31.910067 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:31.910079 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
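Each cycle sweeps the CRI for one control-plane component at a time, and every query here returns found id: "", meaning CRI-O holds no such container in any state, not even an exited one, which points at the control plane never coming up rather than crash-looping. A sketch that consolidates the per-component queries from the cycles above into one loop:

    # One crictl query per component, exactly as in the log; -a includes
    # exited containers, --quiet prints only IDs, empty output = none exist.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      sudo crictl ps -a --quiet --name="$name"
    done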
	I1209 04:44:34.486956 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:34.497309 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:34.497372 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:34.523236 1620518 cri.go:89] found id: ""
	I1209 04:44:34.523250 1620518 logs.go:282] 0 containers: []
	W1209 04:44:34.523257 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:34.523262 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:34.523320 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:34.549906 1620518 cri.go:89] found id: ""
	I1209 04:44:34.549920 1620518 logs.go:282] 0 containers: []
	W1209 04:44:34.549935 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:34.549940 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:34.549997 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:34.577694 1620518 cri.go:89] found id: ""
	I1209 04:44:34.577708 1620518 logs.go:282] 0 containers: []
	W1209 04:44:34.577716 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:34.577721 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:34.577781 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:34.604297 1620518 cri.go:89] found id: ""
	I1209 04:44:34.604311 1620518 logs.go:282] 0 containers: []
	W1209 04:44:34.604319 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:34.604325 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:34.604388 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:34.629233 1620518 cri.go:89] found id: ""
	I1209 04:44:34.629249 1620518 logs.go:282] 0 containers: []
	W1209 04:44:34.629257 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:34.629262 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:34.629330 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:34.659380 1620518 cri.go:89] found id: ""
	I1209 04:44:34.659394 1620518 logs.go:282] 0 containers: []
	W1209 04:44:34.659401 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:34.659407 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:34.659466 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:34.688342 1620518 cri.go:89] found id: ""
	I1209 04:44:34.688356 1620518 logs.go:282] 0 containers: []
	W1209 04:44:34.688363 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:34.688370 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:34.688383 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:34.703538 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:34.703555 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:34.766893 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:34.758520   14106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:34.759198   14106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:34.760746   14106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:34.761300   14106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:34.763031   14106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:44:34.766907 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:34.766925 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:34.835016 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:34.835035 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:34.867468 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:34.867484 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:37.441777 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:37.452150 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:37.452220 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:37.477442 1620518 cri.go:89] found id: ""
	I1209 04:44:37.477456 1620518 logs.go:282] 0 containers: []
	W1209 04:44:37.477463 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:37.477468 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:37.477525 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:37.503669 1620518 cri.go:89] found id: ""
	I1209 04:44:37.503683 1620518 logs.go:282] 0 containers: []
	W1209 04:44:37.503690 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:37.503696 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:37.503756 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:37.529304 1620518 cri.go:89] found id: ""
	I1209 04:44:37.529318 1620518 logs.go:282] 0 containers: []
	W1209 04:44:37.529326 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:37.529331 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:37.529388 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:37.555509 1620518 cri.go:89] found id: ""
	I1209 04:44:37.555523 1620518 logs.go:282] 0 containers: []
	W1209 04:44:37.555539 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:37.555545 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:37.555603 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:37.581297 1620518 cri.go:89] found id: ""
	I1209 04:44:37.581310 1620518 logs.go:282] 0 containers: []
	W1209 04:44:37.581328 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:37.581334 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:37.581403 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:37.607757 1620518 cri.go:89] found id: ""
	I1209 04:44:37.607774 1620518 logs.go:282] 0 containers: []
	W1209 04:44:37.607781 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:37.607787 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:37.607863 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:37.634135 1620518 cri.go:89] found id: ""
	I1209 04:44:37.634159 1620518 logs.go:282] 0 containers: []
	W1209 04:44:37.634167 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:37.634174 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:37.634187 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:37.698412 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:37.690495   14207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:37.691106   14207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:37.692656   14207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:37.693121   14207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:37.694648   14207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
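The describe nodes step fails for the same underlying reason: the in-node kubectl is pointed at a kubeconfig whose server is localhost:8441, and with no apiserver container running every request is refused, so only the stderr above is captured. The exact command the gatherer runs, reproducible over minikube ssh:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig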
	I1209 04:44:37.698423 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:37.698434 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:37.765691 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:37.765711 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:37.794807 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:37.794822 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:37.865591 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:37.865609 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
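With no containers to inspect, the remaining log sources minikube falls back to are the kubelet and CRI-O journals, kernel warnings, and a raw container listing. The same collection, grouped as shell commands taken verbatim from the cycles above:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # container status, with a docker fallback if crictl is missing:
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a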
	I1209 04:44:40.382843 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:40.393026 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:40.393086 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:40.417900 1620518 cri.go:89] found id: ""
	I1209 04:44:40.417913 1620518 logs.go:282] 0 containers: []
	W1209 04:44:40.417920 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:40.417926 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:40.417984 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:40.447221 1620518 cri.go:89] found id: ""
	I1209 04:44:40.447235 1620518 logs.go:282] 0 containers: []
	W1209 04:44:40.447242 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:40.447247 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:40.447305 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:40.472564 1620518 cri.go:89] found id: ""
	I1209 04:44:40.472578 1620518 logs.go:282] 0 containers: []
	W1209 04:44:40.472585 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:40.472591 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:40.472651 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:40.498097 1620518 cri.go:89] found id: ""
	I1209 04:44:40.498111 1620518 logs.go:282] 0 containers: []
	W1209 04:44:40.498118 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:40.498123 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:40.498182 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:40.523258 1620518 cri.go:89] found id: ""
	I1209 04:44:40.523271 1620518 logs.go:282] 0 containers: []
	W1209 04:44:40.523279 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:40.523287 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:40.523343 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:40.548390 1620518 cri.go:89] found id: ""
	I1209 04:44:40.548404 1620518 logs.go:282] 0 containers: []
	W1209 04:44:40.548411 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:40.548417 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:40.548475 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:40.573171 1620518 cri.go:89] found id: ""
	I1209 04:44:40.573185 1620518 logs.go:282] 0 containers: []
	W1209 04:44:40.573192 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:40.573199 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:40.573211 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:40.587922 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:40.587937 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:40.648925 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:40.640617   14317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:40.641385   14317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:40.643081   14317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:40.643670   14317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:40.645179   14317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:44:40.648934 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:40.648945 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:40.721024 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:40.721047 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:40.756647 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:40.756664 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:43.325607 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:43.335615 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:43.335677 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:43.365344 1620518 cri.go:89] found id: ""
	I1209 04:44:43.365360 1620518 logs.go:282] 0 containers: []
	W1209 04:44:43.365367 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:43.365373 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:43.365432 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:43.391751 1620518 cri.go:89] found id: ""
	I1209 04:44:43.391764 1620518 logs.go:282] 0 containers: []
	W1209 04:44:43.391772 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:43.391783 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:43.391843 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:43.417345 1620518 cri.go:89] found id: ""
	I1209 04:44:43.417359 1620518 logs.go:282] 0 containers: []
	W1209 04:44:43.417366 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:43.417372 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:43.417433 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:43.444314 1620518 cri.go:89] found id: ""
	I1209 04:44:43.444328 1620518 logs.go:282] 0 containers: []
	W1209 04:44:43.444335 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:43.444341 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:43.444402 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:43.473635 1620518 cri.go:89] found id: ""
	I1209 04:44:43.473649 1620518 logs.go:282] 0 containers: []
	W1209 04:44:43.473656 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:43.473661 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:43.473721 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:43.499726 1620518 cri.go:89] found id: ""
	I1209 04:44:43.499740 1620518 logs.go:282] 0 containers: []
	W1209 04:44:43.499747 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:43.499752 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:43.499812 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:43.526373 1620518 cri.go:89] found id: ""
	I1209 04:44:43.526388 1620518 logs.go:282] 0 containers: []
	W1209 04:44:43.526396 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:43.526404 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:43.526415 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:43.591625 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:43.591644 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:43.606802 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:43.606818 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:43.671535 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:43.662523   14423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:43.663221   14423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:43.664909   14423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:43.665492   14423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:43.667229   14423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:44:43.671545 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:43.671556 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:43.742830 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:43.742849 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:46.272131 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:46.282533 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:46.282611 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:46.307629 1620518 cri.go:89] found id: ""
	I1209 04:44:46.307644 1620518 logs.go:282] 0 containers: []
	W1209 04:44:46.307652 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:46.307657 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:46.307718 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:46.334241 1620518 cri.go:89] found id: ""
	I1209 04:44:46.334255 1620518 logs.go:282] 0 containers: []
	W1209 04:44:46.334262 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:46.334267 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:46.334326 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:46.360606 1620518 cri.go:89] found id: ""
	I1209 04:44:46.360619 1620518 logs.go:282] 0 containers: []
	W1209 04:44:46.360627 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:46.360632 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:46.360693 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:46.391930 1620518 cri.go:89] found id: ""
	I1209 04:44:46.391944 1620518 logs.go:282] 0 containers: []
	W1209 04:44:46.391951 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:46.391956 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:46.392018 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:46.418088 1620518 cri.go:89] found id: ""
	I1209 04:44:46.418102 1620518 logs.go:282] 0 containers: []
	W1209 04:44:46.418109 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:46.418114 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:46.418173 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:46.444114 1620518 cri.go:89] found id: ""
	I1209 04:44:46.444129 1620518 logs.go:282] 0 containers: []
	W1209 04:44:46.444135 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:46.444141 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:46.444202 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:46.469066 1620518 cri.go:89] found id: ""
	I1209 04:44:46.469079 1620518 logs.go:282] 0 containers: []
	W1209 04:44:46.469096 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:46.469105 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:46.469116 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:46.535118 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:46.526762   14524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:46.527187   14524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:46.528934   14524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:46.529451   14524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:46.531143   14524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:44:46.535128 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:46.535140 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:46.603490 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:46.603513 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:46.633565 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:46.633582 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:46.707757 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:46.707778 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:49.223668 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:49.233804 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:49.233863 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:49.262060 1620518 cri.go:89] found id: ""
	I1209 04:44:49.262074 1620518 logs.go:282] 0 containers: []
	W1209 04:44:49.262081 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:49.262087 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:49.262146 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:49.288289 1620518 cri.go:89] found id: ""
	I1209 04:44:49.288303 1620518 logs.go:282] 0 containers: []
	W1209 04:44:49.288310 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:49.288315 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:49.288372 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:49.317469 1620518 cri.go:89] found id: ""
	I1209 04:44:49.317482 1620518 logs.go:282] 0 containers: []
	W1209 04:44:49.317489 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:49.317495 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:49.317553 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:49.343598 1620518 cri.go:89] found id: ""
	I1209 04:44:49.343612 1620518 logs.go:282] 0 containers: []
	W1209 04:44:49.343619 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:49.343624 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:49.343682 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:49.369884 1620518 cri.go:89] found id: ""
	I1209 04:44:49.369898 1620518 logs.go:282] 0 containers: []
	W1209 04:44:49.369905 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:49.369910 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:49.369968 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:49.397485 1620518 cri.go:89] found id: ""
	I1209 04:44:49.397499 1620518 logs.go:282] 0 containers: []
	W1209 04:44:49.397506 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:49.397512 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:49.397576 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:49.426780 1620518 cri.go:89] found id: ""
	I1209 04:44:49.426794 1620518 logs.go:282] 0 containers: []
	W1209 04:44:49.426802 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:49.426810 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:49.426820 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:49.455508 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:49.455524 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:49.521613 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:49.521632 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:49.537098 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:49.537115 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:49.604403 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:49.595461   14642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:49.596171   14642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:49.597975   14642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:49.598557   14642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:49.600294   14642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:44:49.604415 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:49.604427 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:52.175474 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:52.185416 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:52.185490 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:52.210165 1620518 cri.go:89] found id: ""
	I1209 04:44:52.210179 1620518 logs.go:282] 0 containers: []
	W1209 04:44:52.210186 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:52.210191 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:52.210250 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:52.235252 1620518 cri.go:89] found id: ""
	I1209 04:44:52.235265 1620518 logs.go:282] 0 containers: []
	W1209 04:44:52.235272 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:52.235277 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:52.235335 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:52.260814 1620518 cri.go:89] found id: ""
	I1209 04:44:52.260828 1620518 logs.go:282] 0 containers: []
	W1209 04:44:52.260835 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:52.260840 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:52.260899 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:52.287596 1620518 cri.go:89] found id: ""
	I1209 04:44:52.287609 1620518 logs.go:282] 0 containers: []
	W1209 04:44:52.287616 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:52.287621 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:52.287677 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:52.315049 1620518 cri.go:89] found id: ""
	I1209 04:44:52.315062 1620518 logs.go:282] 0 containers: []
	W1209 04:44:52.315069 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:52.315075 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:52.315139 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:52.339741 1620518 cri.go:89] found id: ""
	I1209 04:44:52.339755 1620518 logs.go:282] 0 containers: []
	W1209 04:44:52.339762 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:52.339767 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:52.339825 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:52.369959 1620518 cri.go:89] found id: ""
	I1209 04:44:52.369973 1620518 logs.go:282] 0 containers: []
	W1209 04:44:52.369981 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:52.369988 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:52.369998 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:52.442787 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:52.434156   14730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:52.434984   14730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:52.436742   14730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:52.437458   14730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:52.439036   14730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:44:52.442797 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:52.442807 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:52.511615 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:52.511634 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:52.542801 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:52.542817 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:52.608882 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:52.608904 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
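The timestamps show the cadence: each cycle starts roughly three seconds after the previous one (04:44:49 -> 04:44:52 -> 04:44:55). A sketch of an equivalent wait loop, with the same placeholder profile name as above; -f makes curl treat HTTP errors as failure:

    # Poll until the apiserver answers on 8441, mirroring the ~3 s cadence.
    until minikube ssh -p <profile> -- curl -skf https://localhost:8441/livez >/dev/null 2>&1; do
      sleep 3
    done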
	I1209 04:44:55.125120 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:55.135789 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:55.135848 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:55.162401 1620518 cri.go:89] found id: ""
	I1209 04:44:55.162416 1620518 logs.go:282] 0 containers: []
	W1209 04:44:55.162423 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:55.162428 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:55.162487 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:55.190716 1620518 cri.go:89] found id: ""
	I1209 04:44:55.190730 1620518 logs.go:282] 0 containers: []
	W1209 04:44:55.190736 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:55.190742 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:55.190799 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:55.216812 1620518 cri.go:89] found id: ""
	I1209 04:44:55.216825 1620518 logs.go:282] 0 containers: []
	W1209 04:44:55.216832 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:55.216839 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:55.216896 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:55.241064 1620518 cri.go:89] found id: ""
	I1209 04:44:55.241079 1620518 logs.go:282] 0 containers: []
	W1209 04:44:55.241086 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:55.241092 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:55.241148 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:55.270237 1620518 cri.go:89] found id: ""
	I1209 04:44:55.270251 1620518 logs.go:282] 0 containers: []
	W1209 04:44:55.270258 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:55.270263 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:55.270322 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:55.296228 1620518 cri.go:89] found id: ""
	I1209 04:44:55.296242 1620518 logs.go:282] 0 containers: []
	W1209 04:44:55.296249 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:55.296254 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:55.296315 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:55.322153 1620518 cri.go:89] found id: ""
	I1209 04:44:55.322167 1620518 logs.go:282] 0 containers: []
	W1209 04:44:55.322174 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:55.322181 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:55.322192 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:55.390665 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:55.390684 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:55.405506 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:55.405523 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:55.471951 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:55.463255   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:55.463802   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:55.465674   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:55.466180   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:55.467961   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:44:55.463255   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:55.463802   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:55.465674   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:55.466180   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:55.467961   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:44:55.471960 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:55.471972 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:55.542641 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:55.542662 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
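The block above is one full diagnostic pass: minikube asks crictl for each expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) and every query comes back empty. A minimal standalone sketch of that probe, assuming crictl is on PATH and sudo needs no password (file and function names here are illustrative, not minikube's own code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers mirrors the log's `sudo crictl ps -a --quiet --name=<component>`:
    // --quiet prints one container ID per line, so empty stdout means no
    // container exists for that component.
    func listContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet"}
        for _, c := range components {
            ids, err := listContainers(c)
            if err != nil {
                fmt.Printf("probe failed for %s: %v\n", c, err)
                continue
            }
            if len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", c)
                continue
            }
            fmt.Printf("%s: %d container(s): %v\n", c, len(ids), ids)
        }
    }

An empty stdout from `crictl ps --quiet` is exactly the `found id: ""` / `0 containers` pair that precedes each "No container was found matching" warning in the log.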
	I1209 04:44:58.078721 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:58.089961 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:58.090029 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:58.117883 1620518 cri.go:89] found id: ""
	I1209 04:44:58.117896 1620518 logs.go:282] 0 containers: []
	W1209 04:44:58.117902 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:58.117908 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:58.117968 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:58.150212 1620518 cri.go:89] found id: ""
	I1209 04:44:58.150226 1620518 logs.go:282] 0 containers: []
	W1209 04:44:58.150233 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:58.150238 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:58.150296 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:58.177448 1620518 cri.go:89] found id: ""
	I1209 04:44:58.177462 1620518 logs.go:282] 0 containers: []
	W1209 04:44:58.177469 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:58.177474 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:58.177533 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:58.203663 1620518 cri.go:89] found id: ""
	I1209 04:44:58.203676 1620518 logs.go:282] 0 containers: []
	W1209 04:44:58.203683 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:58.203688 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:58.203779 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:58.229153 1620518 cri.go:89] found id: ""
	I1209 04:44:58.229167 1620518 logs.go:282] 0 containers: []
	W1209 04:44:58.229174 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:58.229179 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:58.229237 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:58.253337 1620518 cri.go:89] found id: ""
	I1209 04:44:58.253365 1620518 logs.go:282] 0 containers: []
	W1209 04:44:58.253372 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:58.253377 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:58.253433 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:58.279202 1620518 cri.go:89] found id: ""
	I1209 04:44:58.279215 1620518 logs.go:282] 0 containers: []
	W1209 04:44:58.279222 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:58.279230 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:58.279240 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:58.352607 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:58.352626 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:58.380559 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:58.380575 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:58.450340 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:58.450359 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:58.466733 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:58.466753 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:58.539538 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:58.531537   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:58.532132   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:58.533605   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:58.534107   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:58.535589   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:44:58.531537   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:58.532132   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:58.533605   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:58.534107   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:58.535589   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
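Each `describe nodes` attempt fails the same way: kubectl's discovery client cannot even open a TCP connection, so the problem sits below Kubernetes, consistent with the empty container listings above. A quick reachability check in Go, using the host and port shown in the log (the standalone program itself is illustrative):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // localhost:8441 is the apiserver address kubectl is dialing in the
        // log above; "connection refused" means nothing is listening there.
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }

With no kube-apiserver container running, this check fails at the dial stage, which is why kubectl reports the error five times per invocation before giving up with "The connection to the server localhost:8441 was refused".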
	I1209 04:45:01.039807 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:01.051635 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:01.051699 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:01.081094 1620518 cri.go:89] found id: ""
	I1209 04:45:01.081120 1620518 logs.go:282] 0 containers: []
	W1209 04:45:01.081132 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:01.081138 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:01.081216 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:01.110254 1620518 cri.go:89] found id: ""
	I1209 04:45:01.110270 1620518 logs.go:282] 0 containers: []
	W1209 04:45:01.110277 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:01.110282 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:01.110348 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:01.142200 1620518 cri.go:89] found id: ""
	I1209 04:45:01.142217 1620518 logs.go:282] 0 containers: []
	W1209 04:45:01.142224 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:01.142230 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:01.142295 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:01.173624 1620518 cri.go:89] found id: ""
	I1209 04:45:01.173640 1620518 logs.go:282] 0 containers: []
	W1209 04:45:01.173647 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:01.173653 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:01.173714 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:01.200655 1620518 cri.go:89] found id: ""
	I1209 04:45:01.200669 1620518 logs.go:282] 0 containers: []
	W1209 04:45:01.200676 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:01.200681 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:01.200753 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:01.228245 1620518 cri.go:89] found id: ""
	I1209 04:45:01.228260 1620518 logs.go:282] 0 containers: []
	W1209 04:45:01.228268 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:01.228274 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:01.228344 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:01.255910 1620518 cri.go:89] found id: ""
	I1209 04:45:01.255924 1620518 logs.go:282] 0 containers: []
	W1209 04:45:01.255932 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:01.255941 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:01.255955 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:01.272811 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:01.272829 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:01.345905 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:01.336766   15040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:01.337312   15040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:01.339248   15040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:01.339633   15040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:01.341414   15040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:45:01.336766   15040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:01.337312   15040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:01.339248   15040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:01.339633   15040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:01.341414   15040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:45:01.345916 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:01.345926 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:01.428612 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:01.428634 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:01.462789 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:01.462805 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
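One detail worth noticing: the five gatherers (kubelet, dmesg, describe nodes, CRI-O, container status) run in a different order on every cycle. That pattern is what iterating a Go map looks like, since Go randomizes map iteration order per run; whether minikube's collector actually stores its gatherers in a map is an inference, but the behavior matches. A self-contained illustration, with keys copied from the "Gathering logs for ..." lines above:

    package main

    import "fmt"

    func main() {
        // Successive runs print these keys in different orders; Go's
        // runtime deliberately randomizes map iteration.
        gatherers := map[string]string{
            "kubelet":          "journalctl -u kubelet -n 400",
            "dmesg":            "dmesg ... | tail -n 400",
            "describe nodes":   "kubectl describe nodes",
            "CRI-O":            "journalctl -u crio -n 400",
            "container status": "crictl ps -a",
        }
        for name := range gatherers {
            fmt.Println("Gathering logs for", name, "...")
        }
    }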
	I1209 04:45:04.036441 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:04.048197 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:04.048263 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:04.077328 1620518 cri.go:89] found id: ""
	I1209 04:45:04.077347 1620518 logs.go:282] 0 containers: []
	W1209 04:45:04.077354 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:04.077361 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:04.077424 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:04.105221 1620518 cri.go:89] found id: ""
	I1209 04:45:04.105235 1620518 logs.go:282] 0 containers: []
	W1209 04:45:04.105243 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:04.105249 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:04.105315 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:04.136847 1620518 cri.go:89] found id: ""
	I1209 04:45:04.136860 1620518 logs.go:282] 0 containers: []
	W1209 04:45:04.136868 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:04.136873 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:04.136934 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:04.167906 1620518 cri.go:89] found id: ""
	I1209 04:45:04.167920 1620518 logs.go:282] 0 containers: []
	W1209 04:45:04.167930 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:04.167936 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:04.168012 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:04.198111 1620518 cri.go:89] found id: ""
	I1209 04:45:04.198126 1620518 logs.go:282] 0 containers: []
	W1209 04:45:04.198133 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:04.198139 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:04.198201 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:04.228375 1620518 cri.go:89] found id: ""
	I1209 04:45:04.228389 1620518 logs.go:282] 0 containers: []
	W1209 04:45:04.228396 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:04.228402 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:04.228460 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:04.255398 1620518 cri.go:89] found id: ""
	I1209 04:45:04.255411 1620518 logs.go:282] 0 containers: []
	W1209 04:45:04.255418 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:04.255425 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:04.255436 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:04.285882 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:04.285898 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:04.352741 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:04.352763 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:04.369185 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:04.369202 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:04.440688 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:04.432150   15163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:04.432585   15163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:04.434392   15163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:04.434973   15163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:04.436580   15163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:45:04.432150   15163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:04.432585   15163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:04.434392   15163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:04.434973   15163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:04.436580   15163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:45:04.440698 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:04.440710 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
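The timestamps on the `pgrep` lines (04:44:58, 04:45:01, 04:45:04, ...) show a roughly three-second poll for a kube-apiserver process, with the full gathering pass repeating only while the poll keeps failing. A sketch of such a wait loop, assuming the same pgrep pattern (the interval and timeout values are illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverRunning mirrors the log's health probe. pgrep flags: -x match
    // the pattern against the whole line, -n pick the newest match, -f match
    // the full command line rather than just the process name. pgrep exits 0
    // only when a process matched.
    func apiserverRunning() bool {
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // timeout is illustrative
        for time.Now().Before(deadline) {
            if apiserverRunning() {
                fmt.Println("kube-apiserver process found")
                return
            }
            time.Sleep(3 * time.Second) // matches the ~3s cadence in the log
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }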
	I1209 04:45:07.013764 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:07.024294 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:07.024356 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:07.050143 1620518 cri.go:89] found id: ""
	I1209 04:45:07.050157 1620518 logs.go:282] 0 containers: []
	W1209 04:45:07.050164 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:07.050170 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:07.050240 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:07.076876 1620518 cri.go:89] found id: ""
	I1209 04:45:07.076890 1620518 logs.go:282] 0 containers: []
	W1209 04:45:07.076897 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:07.076902 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:07.076957 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:07.102491 1620518 cri.go:89] found id: ""
	I1209 04:45:07.102505 1620518 logs.go:282] 0 containers: []
	W1209 04:45:07.102512 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:07.102517 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:07.102597 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:07.132406 1620518 cri.go:89] found id: ""
	I1209 04:45:07.132421 1620518 logs.go:282] 0 containers: []
	W1209 04:45:07.132428 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:07.132432 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:07.132489 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:07.158308 1620518 cri.go:89] found id: ""
	I1209 04:45:07.158322 1620518 logs.go:282] 0 containers: []
	W1209 04:45:07.158329 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:07.158334 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:07.158394 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:07.185219 1620518 cri.go:89] found id: ""
	I1209 04:45:07.185232 1620518 logs.go:282] 0 containers: []
	W1209 04:45:07.185240 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:07.185245 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:07.185304 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:07.211200 1620518 cri.go:89] found id: ""
	I1209 04:45:07.211213 1620518 logs.go:282] 0 containers: []
	W1209 04:45:07.211220 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:07.211227 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:07.211239 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:07.279098 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:07.279117 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:07.307654 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:07.307669 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:07.380382 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:07.380406 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:07.396198 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:07.396216 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:07.463840 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:07.455780   15275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:07.456634   15275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:07.458306   15275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:07.458894   15275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:07.460163   15275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:45:07.455780   15275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:07.456634   15275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:07.458306   15275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:07.458894   15275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:07.460163   15275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:45:09.964491 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:09.974856 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:09.974917 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:10.013610 1620518 cri.go:89] found id: ""
	I1209 04:45:10.013627 1620518 logs.go:282] 0 containers: []
	W1209 04:45:10.013635 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:10.013641 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:10.013710 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:10.041923 1620518 cri.go:89] found id: ""
	I1209 04:45:10.041937 1620518 logs.go:282] 0 containers: []
	W1209 04:45:10.041945 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:10.041950 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:10.042012 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:10.070273 1620518 cri.go:89] found id: ""
	I1209 04:45:10.070287 1620518 logs.go:282] 0 containers: []
	W1209 04:45:10.070295 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:10.070306 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:10.070365 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:10.101336 1620518 cri.go:89] found id: ""
	I1209 04:45:10.101350 1620518 logs.go:282] 0 containers: []
	W1209 04:45:10.101357 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:10.101362 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:10.101423 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:10.129685 1620518 cri.go:89] found id: ""
	I1209 04:45:10.129699 1620518 logs.go:282] 0 containers: []
	W1209 04:45:10.129706 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:10.129711 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:10.129770 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:10.157137 1620518 cri.go:89] found id: ""
	I1209 04:45:10.157151 1620518 logs.go:282] 0 containers: []
	W1209 04:45:10.157158 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:10.157164 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:10.157223 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:10.186869 1620518 cri.go:89] found id: ""
	I1209 04:45:10.186883 1620518 logs.go:282] 0 containers: []
	W1209 04:45:10.186891 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:10.186898 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:10.186912 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:10.217015 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:10.217032 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:10.284415 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:10.284437 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:10.299713 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:10.299729 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:10.383660 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:10.374562   15372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:10.375344   15372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:10.376918   15372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:10.377428   15372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:10.379505   15372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:45:10.374562   15372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:10.375344   15372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:10.376918   15372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:10.377428   15372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:10.379505   15372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:45:10.383683 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:10.383695 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
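The container-status gatherer is deliberately defensive: the backticks substitute crictl's full path when `which` finds it (or the bare name when it does not), and `|| sudo docker ps -a` falls back to the docker CLI if the whole crictl invocation fails. The same two-step fallback expressed in Go (a sketch, assuming either CLI may be absent):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus tries crictl first and falls back to docker, like the
    // shell line `sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a`.
    func containerStatus() (string, error) {
        if out, err := exec.Command("sudo", "crictl", "ps", "-a").Output(); err == nil {
            return string(out), nil
        }
        out, err := exec.Command("sudo", "docker", "ps", "-a").Output()
        return string(out), err
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("no container runtime CLI responded:", err)
            return
        }
        fmt.Print(out)
    }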
	I1209 04:45:12.956212 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:12.967122 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:12.967187 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:12.992647 1620518 cri.go:89] found id: ""
	I1209 04:45:12.992661 1620518 logs.go:282] 0 containers: []
	W1209 04:45:12.992667 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:12.992673 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:12.992731 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:13.024601 1620518 cri.go:89] found id: ""
	I1209 04:45:13.024616 1620518 logs.go:282] 0 containers: []
	W1209 04:45:13.024623 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:13.024628 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:13.024689 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:13.054508 1620518 cri.go:89] found id: ""
	I1209 04:45:13.054522 1620518 logs.go:282] 0 containers: []
	W1209 04:45:13.054529 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:13.054534 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:13.054612 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:13.080662 1620518 cri.go:89] found id: ""
	I1209 04:45:13.080681 1620518 logs.go:282] 0 containers: []
	W1209 04:45:13.080688 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:13.080693 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:13.080750 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:13.112334 1620518 cri.go:89] found id: ""
	I1209 04:45:13.112347 1620518 logs.go:282] 0 containers: []
	W1209 04:45:13.112354 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:13.112363 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:13.112421 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:13.141334 1620518 cri.go:89] found id: ""
	I1209 04:45:13.141348 1620518 logs.go:282] 0 containers: []
	W1209 04:45:13.141355 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:13.141360 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:13.141433 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:13.166692 1620518 cri.go:89] found id: ""
	I1209 04:45:13.166706 1620518 logs.go:282] 0 containers: []
	W1209 04:45:13.166713 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:13.166721 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:13.166735 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:13.230693 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:13.221480   15460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:13.222331   15460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:13.224060   15460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:13.224679   15460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:13.226481   15460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:45:13.221480   15460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:13.222331   15460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:13.224060   15460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:13.224679   15460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:13.226481   15460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:45:13.230703 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:13.230718 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:13.299665 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:13.299685 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:13.343575 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:13.343591 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:13.418530 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:13.418550 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
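The piped gatherers are wrapped in `/bin/bash -c` because a plain exec of an argument vector would not interpret the `|`. For the dmesg line, the flags shown are: -H for human-readable output, -P to disable the pager that -H normally implies, -L=never to suppress color, and --level to keep only warnings and worse; `tail -n 400` caps the result. A sketch of running that same pipeline from Go:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // The pipe needs a shell; exec'ing dmesg directly would treat "|"
        // and "tail" as ordinary arguments.
        cmd := `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Println("dmesg gather failed:", err)
        }
        fmt.Print(string(out))
    }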
	I1209 04:45:15.934049 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:15.944397 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:15.944459 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:15.969801 1620518 cri.go:89] found id: ""
	I1209 04:45:15.969814 1620518 logs.go:282] 0 containers: []
	W1209 04:45:15.969821 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:15.969827 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:15.969886 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:15.995679 1620518 cri.go:89] found id: ""
	I1209 04:45:15.995693 1620518 logs.go:282] 0 containers: []
	W1209 04:45:15.995700 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:15.995705 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:15.995761 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:16.029078 1620518 cri.go:89] found id: ""
	I1209 04:45:16.029092 1620518 logs.go:282] 0 containers: []
	W1209 04:45:16.029100 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:16.029105 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:16.029167 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:16.057686 1620518 cri.go:89] found id: ""
	I1209 04:45:16.057700 1620518 logs.go:282] 0 containers: []
	W1209 04:45:16.057707 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:16.057712 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:16.057773 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:16.085790 1620518 cri.go:89] found id: ""
	I1209 04:45:16.085804 1620518 logs.go:282] 0 containers: []
	W1209 04:45:16.085811 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:16.085816 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:16.085876 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:16.112272 1620518 cri.go:89] found id: ""
	I1209 04:45:16.112288 1620518 logs.go:282] 0 containers: []
	W1209 04:45:16.112295 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:16.112301 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:16.112371 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:16.137697 1620518 cri.go:89] found id: ""
	I1209 04:45:16.137711 1620518 logs.go:282] 0 containers: []
	W1209 04:45:16.137718 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:16.137726 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:16.137741 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:16.170480 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:16.170495 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:16.235651 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:16.235671 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:16.250648 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:16.250664 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:16.313079 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:16.304999   15583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:16.305695   15583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:16.307368   15583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:16.307905   15583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:16.309411   15583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:45:16.304999   15583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:16.305695   15583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:16.307368   15583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:16.307905   15583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:16.309411   15583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:45:16.313088 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:16.313099 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:18.888938 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:18.899614 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:18.899678 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:18.925762 1620518 cri.go:89] found id: ""
	I1209 04:45:18.925775 1620518 logs.go:282] 0 containers: []
	W1209 04:45:18.925782 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:18.925787 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:18.925843 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:18.952615 1620518 cri.go:89] found id: ""
	I1209 04:45:18.952629 1620518 logs.go:282] 0 containers: []
	W1209 04:45:18.952636 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:18.952641 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:18.952703 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:18.978511 1620518 cri.go:89] found id: ""
	I1209 04:45:18.978525 1620518 logs.go:282] 0 containers: []
	W1209 04:45:18.978532 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:18.978537 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:18.978620 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:19.007151 1620518 cri.go:89] found id: ""
	I1209 04:45:19.007166 1620518 logs.go:282] 0 containers: []
	W1209 04:45:19.007173 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:19.007183 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:19.007244 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:19.034621 1620518 cri.go:89] found id: ""
	I1209 04:45:19.034635 1620518 logs.go:282] 0 containers: []
	W1209 04:45:19.034643 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:19.034648 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:19.034708 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:19.063843 1620518 cri.go:89] found id: ""
	I1209 04:45:19.063856 1620518 logs.go:282] 0 containers: []
	W1209 04:45:19.063863 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:19.063868 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:19.063929 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:19.090085 1620518 cri.go:89] found id: ""
	I1209 04:45:19.090099 1620518 logs.go:282] 0 containers: []
	W1209 04:45:19.090106 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:19.090114 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:19.090125 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:19.159590 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:19.150395   15671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:19.151167   15671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:19.152762   15671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:19.153413   15671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:19.155202   15671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:45:19.159614 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:19.159626 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:19.228469 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:19.228489 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:19.257518 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:19.257534 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:19.323776 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:19.323796 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
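The block above is one iteration of minikube's apiserver wait loop: it looks for a running kube-apiserver process, queries the container runtime for each control-plane component, finds nothing, and gathers describe-nodes, CRI-O, container-status, kubelet, and dmesg logs before trying again a few seconds later. A minimal bash sketch of the probe being repeated, using the command strings from the Run: lines above (the 3-second interval and the retry cap are assumptions inferred from the timestamps, not values taken from the log):

    # sketch of the wait loop; interval and attempt count are assumed
    for attempt in $(seq 1 10); do
        if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
            echo "apiserver process found"
            break
        fi
        # no process yet: check whether a container exists at all
        sudo crictl ps -a --quiet --name=kube-apiserver
        sleep 3
    done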
	I1209 04:45:21.846133 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:21.856537 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:21.856603 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:21.883050 1620518 cri.go:89] found id: ""
	I1209 04:45:21.883071 1620518 logs.go:282] 0 containers: []
	W1209 04:45:21.883079 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:21.883084 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:21.883144 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:21.909529 1620518 cri.go:89] found id: ""
	I1209 04:45:21.909544 1620518 logs.go:282] 0 containers: []
	W1209 04:45:21.909551 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:21.909557 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:21.909616 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:21.935426 1620518 cri.go:89] found id: ""
	I1209 04:45:21.935440 1620518 logs.go:282] 0 containers: []
	W1209 04:45:21.935447 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:21.935452 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:21.935513 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:21.964269 1620518 cri.go:89] found id: ""
	I1209 04:45:21.964283 1620518 logs.go:282] 0 containers: []
	W1209 04:45:21.964290 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:21.964295 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:21.964351 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:21.991621 1620518 cri.go:89] found id: ""
	I1209 04:45:21.991637 1620518 logs.go:282] 0 containers: []
	W1209 04:45:21.991644 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:21.991650 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:21.991710 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:22.018422 1620518 cri.go:89] found id: ""
	I1209 04:45:22.018437 1620518 logs.go:282] 0 containers: []
	W1209 04:45:22.018445 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:22.018450 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:22.018510 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:22.045499 1620518 cri.go:89] found id: ""
	I1209 04:45:22.045514 1620518 logs.go:282] 0 containers: []
	W1209 04:45:22.045522 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:22.045529 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:22.045541 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:22.111892 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:22.103280   15779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:22.104064   15779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:22.105650   15779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:22.106182   15779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:22.107773   15779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:45:22.111907 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:22.111923 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:22.180045 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:22.180065 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:22.210199 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:22.210215 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:22.276418 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:22.276439 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
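Each cycle issues the same crictl query once per control-plane component; --quiet prints only container IDs, so an empty result is exactly what produces the `found id: ""` and `No container was found matching ...` pairs above. The seven probes collapse into one loop (a sketch; the flags are taken verbatim from the log):

    # the seven per-component probes above, as one loop
    for name in kube-apiserver etcd coredns kube-scheduler \
                kube-proxy kube-controller-manager kindnet; do
        ids=$(sudo crictl ps -a --quiet --name="$name")
        [ -z "$ids" ] && echo "no container matching \"$name\""
    done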
	I1209 04:45:24.791989 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:24.802138 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:24.802199 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:24.830421 1620518 cri.go:89] found id: ""
	I1209 04:45:24.830434 1620518 logs.go:282] 0 containers: []
	W1209 04:45:24.830441 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:24.830446 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:24.830509 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:24.855641 1620518 cri.go:89] found id: ""
	I1209 04:45:24.855653 1620518 logs.go:282] 0 containers: []
	W1209 04:45:24.855661 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:24.855666 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:24.855723 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:24.882261 1620518 cri.go:89] found id: ""
	I1209 04:45:24.882275 1620518 logs.go:282] 0 containers: []
	W1209 04:45:24.882282 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:24.882287 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:24.882346 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:24.909451 1620518 cri.go:89] found id: ""
	I1209 04:45:24.909465 1620518 logs.go:282] 0 containers: []
	W1209 04:45:24.909472 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:24.909477 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:24.909538 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:24.935023 1620518 cri.go:89] found id: ""
	I1209 04:45:24.935036 1620518 logs.go:282] 0 containers: []
	W1209 04:45:24.935043 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:24.935048 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:24.935105 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:24.965362 1620518 cri.go:89] found id: ""
	I1209 04:45:24.965375 1620518 logs.go:282] 0 containers: []
	W1209 04:45:24.965390 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:24.965396 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:24.965454 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:24.993349 1620518 cri.go:89] found id: ""
	I1209 04:45:24.993362 1620518 logs.go:282] 0 containers: []
	W1209 04:45:24.993369 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:24.993377 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:24.993387 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:25.060817 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:25.060841 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:25.077397 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:25.077415 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:25.149136 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:25.140893   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.141508   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.142608   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.143318   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.145008   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:45:25.149146 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:25.149157 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:25.218866 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:25.218886 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
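The "container status" gather is deliberately runtime-agnostic: it runs whatever `which crictl` resolves to (falling back to the bare name crictl if the lookup fails, so the resulting error is still informative), and if that command fails it falls back to docker. The same command from the Run: line above, expanded for readability:

    # the container-status probe, expanded
    sudo "$(which crictl || echo crictl)" ps -a \
        || sudo docker ps -a    # last resort if crictl is missing or errors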
	I1209 04:45:27.749537 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:27.760277 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:27.760345 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:27.786623 1620518 cri.go:89] found id: ""
	I1209 04:45:27.786636 1620518 logs.go:282] 0 containers: []
	W1209 04:45:27.786643 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:27.786648 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:27.786705 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:27.813156 1620518 cri.go:89] found id: ""
	I1209 04:45:27.813169 1620518 logs.go:282] 0 containers: []
	W1209 04:45:27.813176 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:27.813181 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:27.813238 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:27.838803 1620518 cri.go:89] found id: ""
	I1209 04:45:27.838817 1620518 logs.go:282] 0 containers: []
	W1209 04:45:27.838824 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:27.838835 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:27.838896 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:27.865975 1620518 cri.go:89] found id: ""
	I1209 04:45:27.865988 1620518 logs.go:282] 0 containers: []
	W1209 04:45:27.865996 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:27.866001 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:27.866058 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:27.891739 1620518 cri.go:89] found id: ""
	I1209 04:45:27.891753 1620518 logs.go:282] 0 containers: []
	W1209 04:45:27.891761 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:27.891766 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:27.891825 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:27.922057 1620518 cri.go:89] found id: ""
	I1209 04:45:27.922071 1620518 logs.go:282] 0 containers: []
	W1209 04:45:27.922079 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:27.922084 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:27.922143 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:27.947345 1620518 cri.go:89] found id: ""
	I1209 04:45:27.947359 1620518 logs.go:282] 0 containers: []
	W1209 04:45:27.947366 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:27.947373 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:27.947384 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:28.018760 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:28.018788 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:28.035483 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:28.035508 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:28.104231 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:28.095397   15994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:28.096215   15994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:28.097976   15994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:28.098564   15994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:28.100134   15994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:45:28.104241 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:28.104253 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:28.173176 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:28.173196 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
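The dmesg gather filters the kernel ring buffer down to warning-and-worse messages. With util-linux long options, the short flags in the log spell out as follows (an equivalent command, shown only for readability):

    # dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo dmesg --nopager --human --color=never \
        --level warn,err,crit,alert,emerg | tail -n 400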
	I1209 04:45:30.707635 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:30.717972 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:30.718036 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:30.743336 1620518 cri.go:89] found id: ""
	I1209 04:45:30.743350 1620518 logs.go:282] 0 containers: []
	W1209 04:45:30.743357 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:30.743363 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:30.743420 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:30.768727 1620518 cri.go:89] found id: ""
	I1209 04:45:30.768741 1620518 logs.go:282] 0 containers: []
	W1209 04:45:30.768748 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:30.768754 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:30.768811 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:30.797959 1620518 cri.go:89] found id: ""
	I1209 04:45:30.797973 1620518 logs.go:282] 0 containers: []
	W1209 04:45:30.797980 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:30.797985 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:30.798046 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:30.825422 1620518 cri.go:89] found id: ""
	I1209 04:45:30.825435 1620518 logs.go:282] 0 containers: []
	W1209 04:45:30.825442 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:30.825448 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:30.825506 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:30.854265 1620518 cri.go:89] found id: ""
	I1209 04:45:30.854278 1620518 logs.go:282] 0 containers: []
	W1209 04:45:30.854285 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:30.854290 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:30.854347 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:30.880403 1620518 cri.go:89] found id: ""
	I1209 04:45:30.880418 1620518 logs.go:282] 0 containers: []
	W1209 04:45:30.880426 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:30.880432 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:30.880494 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:30.913767 1620518 cri.go:89] found id: ""
	I1209 04:45:30.913781 1620518 logs.go:282] 0 containers: []
	W1209 04:45:30.913789 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:30.913796 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:30.913807 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:30.980378 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:30.980398 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:30.995822 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:30.995838 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:31.066169 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:31.058055   16098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:31.058662   16098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:31.060209   16098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:31.060692   16098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:31.062141   16098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:45:31.066179 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:31.066190 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:31.138123 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:31.138142 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:33.670737 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:33.681036 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:33.681099 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:33.709926 1620518 cri.go:89] found id: ""
	I1209 04:45:33.709939 1620518 logs.go:282] 0 containers: []
	W1209 04:45:33.709947 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:33.709963 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:33.710023 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:33.737554 1620518 cri.go:89] found id: ""
	I1209 04:45:33.737567 1620518 logs.go:282] 0 containers: []
	W1209 04:45:33.737574 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:33.737579 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:33.737640 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:33.763709 1620518 cri.go:89] found id: ""
	I1209 04:45:33.763723 1620518 logs.go:282] 0 containers: []
	W1209 04:45:33.763731 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:33.763736 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:33.763794 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:33.792885 1620518 cri.go:89] found id: ""
	I1209 04:45:33.792899 1620518 logs.go:282] 0 containers: []
	W1209 04:45:33.792906 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:33.792912 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:33.792971 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:33.818657 1620518 cri.go:89] found id: ""
	I1209 04:45:33.818671 1620518 logs.go:282] 0 containers: []
	W1209 04:45:33.818678 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:33.818683 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:33.818741 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:33.845152 1620518 cri.go:89] found id: ""
	I1209 04:45:33.845167 1620518 logs.go:282] 0 containers: []
	W1209 04:45:33.845174 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:33.845179 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:33.845237 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:33.871504 1620518 cri.go:89] found id: ""
	I1209 04:45:33.871517 1620518 logs.go:282] 0 containers: []
	W1209 04:45:33.871524 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:33.871532 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:33.871543 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:33.938353 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:33.938373 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:33.954248 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:33.954267 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:34.025014 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:34.015102   16202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:34.016063   16202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:34.016884   16202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:34.018662   16202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:34.019422   16202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:45:34.025026 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:34.025038 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:34.096006 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:34.096027 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:36.630302 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:36.640925 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:36.640999 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:36.669961 1620518 cri.go:89] found id: ""
	I1209 04:45:36.669975 1620518 logs.go:282] 0 containers: []
	W1209 04:45:36.669982 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:36.669988 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:36.670044 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:36.696918 1620518 cri.go:89] found id: ""
	I1209 04:45:36.696934 1620518 logs.go:282] 0 containers: []
	W1209 04:45:36.696942 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:36.696947 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:36.697007 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:36.727113 1620518 cri.go:89] found id: ""
	I1209 04:45:36.727127 1620518 logs.go:282] 0 containers: []
	W1209 04:45:36.727136 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:36.727141 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:36.727201 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:36.752459 1620518 cri.go:89] found id: ""
	I1209 04:45:36.752473 1620518 logs.go:282] 0 containers: []
	W1209 04:45:36.752480 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:36.752485 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:36.752543 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:36.778403 1620518 cri.go:89] found id: ""
	I1209 04:45:36.778417 1620518 logs.go:282] 0 containers: []
	W1209 04:45:36.778425 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:36.778430 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:36.778488 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:36.809409 1620518 cri.go:89] found id: ""
	I1209 04:45:36.809423 1620518 logs.go:282] 0 containers: []
	W1209 04:45:36.809430 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:36.809436 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:36.809494 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:36.838444 1620518 cri.go:89] found id: ""
	I1209 04:45:36.838457 1620518 logs.go:282] 0 containers: []
	W1209 04:45:36.838464 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:36.838472 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:36.838484 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:36.853995 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:36.854011 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:36.919371 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:36.909708   16305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:36.910442   16305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:36.912223   16305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:36.912779   16305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:36.914634   16305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:45:36.919381 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:36.919395 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:36.992004 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:36.992025 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:37.033214 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:37.033230 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:39.602680 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:39.614476 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:39.614537 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:39.644626 1620518 cri.go:89] found id: ""
	I1209 04:45:39.644640 1620518 logs.go:282] 0 containers: []
	W1209 04:45:39.644647 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:39.644652 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:39.644711 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:39.673317 1620518 cri.go:89] found id: ""
	I1209 04:45:39.673331 1620518 logs.go:282] 0 containers: []
	W1209 04:45:39.673338 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:39.673343 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:39.673404 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:39.699053 1620518 cri.go:89] found id: ""
	I1209 04:45:39.699067 1620518 logs.go:282] 0 containers: []
	W1209 04:45:39.699074 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:39.699079 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:39.699141 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:39.724341 1620518 cri.go:89] found id: ""
	I1209 04:45:39.724355 1620518 logs.go:282] 0 containers: []
	W1209 04:45:39.724362 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:39.724370 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:39.724429 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:39.749975 1620518 cri.go:89] found id: ""
	I1209 04:45:39.749988 1620518 logs.go:282] 0 containers: []
	W1209 04:45:39.749995 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:39.750001 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:39.750060 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:39.774556 1620518 cri.go:89] found id: ""
	I1209 04:45:39.774588 1620518 logs.go:282] 0 containers: []
	W1209 04:45:39.774597 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:39.774602 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:39.774663 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:39.800285 1620518 cri.go:89] found id: ""
	I1209 04:45:39.800299 1620518 logs.go:282] 0 containers: []
	W1209 04:45:39.800307 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:39.800314 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:39.800325 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:39.830073 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:39.830089 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:39.898438 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:39.898457 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:39.913743 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:39.913759 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:39.982308 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:39.974192   16422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:39.974938   16422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:39.976740   16422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:39.977237   16422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:39.978358   16422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:45:39.982319 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:39.982332 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:42.561378 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:42.571315 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:42.571383 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:42.602452 1620518 cri.go:89] found id: ""
	I1209 04:45:42.602466 1620518 logs.go:282] 0 containers: []
	W1209 04:45:42.602473 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:42.602478 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:42.602541 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:42.634016 1620518 cri.go:89] found id: ""
	I1209 04:45:42.634029 1620518 logs.go:282] 0 containers: []
	W1209 04:45:42.634037 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:42.634042 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:42.634102 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:42.665601 1620518 cri.go:89] found id: ""
	I1209 04:45:42.665614 1620518 logs.go:282] 0 containers: []
	W1209 04:45:42.665621 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:42.665627 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:42.665683 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:42.692605 1620518 cri.go:89] found id: ""
	I1209 04:45:42.692618 1620518 logs.go:282] 0 containers: []
	W1209 04:45:42.692626 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:42.692631 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:42.692692 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:42.719572 1620518 cri.go:89] found id: ""
	I1209 04:45:42.719585 1620518 logs.go:282] 0 containers: []
	W1209 04:45:42.719592 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:42.719598 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:42.719660 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:42.745298 1620518 cri.go:89] found id: ""
	I1209 04:45:42.745312 1620518 logs.go:282] 0 containers: []
	W1209 04:45:42.745319 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:42.745324 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:42.745391 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:42.770685 1620518 cri.go:89] found id: ""
	I1209 04:45:42.770698 1620518 logs.go:282] 0 containers: []
	W1209 04:45:42.770706 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:42.770714 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:42.770724 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:42.840866 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:42.840888 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:42.871659 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:42.871676 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:42.941154 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:42.941174 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:42.956621 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:42.956638 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:43.026115 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:43.016607   16527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:43.017380   16527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:43.019274   16527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:43.020072   16527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:43.021739   16527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
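Every describe-nodes attempt in this window fails identically: the profile's kubeconfig points kubectl at https://localhost:8441, and nothing is listening there because no kube-apiserver container ever starts. A hypothetical manual check from inside the node, reusing the binary and kubeconfig paths from the log (the ss/grep probe is an assumption, not something the test runs):

    # is anything bound to the apiserver port?
    sudo ss -ltn | grep -q ':8441 ' && echo "8441 open" || echo "nothing listening on 8441"
    # rerun the same probe the wait loop uses
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
        --kubeconfig=/var/lib/minikube/kubeconfig get nodes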
	I1209 04:45:45.527782 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:45.537648 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:45.537707 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:45.564248 1620518 cri.go:89] found id: ""
	I1209 04:45:45.564263 1620518 logs.go:282] 0 containers: []
	W1209 04:45:45.564270 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:45.564277 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:45.564337 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:45.599479 1620518 cri.go:89] found id: ""
	I1209 04:45:45.599492 1620518 logs.go:282] 0 containers: []
	W1209 04:45:45.599499 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:45.599504 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:45.599560 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:45.629541 1620518 cri.go:89] found id: ""
	I1209 04:45:45.629554 1620518 logs.go:282] 0 containers: []
	W1209 04:45:45.629563 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:45.629568 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:45.629624 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:45.660451 1620518 cri.go:89] found id: ""
	I1209 04:45:45.660465 1620518 logs.go:282] 0 containers: []
	W1209 04:45:45.660472 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:45.660477 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:45.660537 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:45.686489 1620518 cri.go:89] found id: ""
	I1209 04:45:45.686503 1620518 logs.go:282] 0 containers: []
	W1209 04:45:45.686509 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:45.686514 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:45.686616 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:45.711940 1620518 cri.go:89] found id: ""
	I1209 04:45:45.711954 1620518 logs.go:282] 0 containers: []
	W1209 04:45:45.711961 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:45.711967 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:45.712025 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:45.737703 1620518 cri.go:89] found id: ""
	I1209 04:45:45.737717 1620518 logs.go:282] 0 containers: []
	W1209 04:45:45.737724 1620518 logs.go:284] No container was found matching "kindnet"
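
	minikube sweeps the control-plane components one by one with the same crictl filter, and every query returns an empty ID list: CRI-O holds no apiserver, etcd, CoreDNS, scheduler, proxy, controller-manager, or kindnet container, running or exited. The sweep condenses to a short loop using exactly the flags logged above (a sketch):

	    for name in kube-apiserver etcd coredns kube-scheduler \
	                kube-proxy kube-controller-manager kindnet; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      echo "$name: ${ids:-<none>}"
	    done
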
	I1209 04:45:45.737732 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:45.737745 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:45.802439 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:45.793968   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:45.794602   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:45.796316   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:45.796982   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:45.798503   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:45:45.793968   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:45.794602   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:45.796316   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:45.796982   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:45.798503   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:45:45.802451 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:45.802474 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:45.871530 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:45.871550 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:45.901994 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:45.902010 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:45.973222 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:45.973241 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
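
	Each failed probe ends with the same evidence sweep: describe nodes (which keeps failing), the CRI-O and kubelet journals, container status, and dmesg. To collect the same set by hand, the commands are the ones the log records, with the backtick substitution rewritten as $( ):

	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
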
	I1209 04:45:48.488532 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:48.499003 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:48.499072 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:48.524749 1620518 cri.go:89] found id: ""
	I1209 04:45:48.524762 1620518 logs.go:282] 0 containers: []
	W1209 04:45:48.524769 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:48.524774 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:48.524830 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:48.553895 1620518 cri.go:89] found id: ""
	I1209 04:45:48.553909 1620518 logs.go:282] 0 containers: []
	W1209 04:45:48.553917 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:48.553922 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:48.553984 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:48.581047 1620518 cri.go:89] found id: ""
	I1209 04:45:48.581069 1620518 logs.go:282] 0 containers: []
	W1209 04:45:48.581078 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:48.581084 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:48.581153 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:48.614680 1620518 cri.go:89] found id: ""
	I1209 04:45:48.614693 1620518 logs.go:282] 0 containers: []
	W1209 04:45:48.614701 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:48.614706 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:48.614774 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:48.643818 1620518 cri.go:89] found id: ""
	I1209 04:45:48.643832 1620518 logs.go:282] 0 containers: []
	W1209 04:45:48.643839 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:48.643845 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:48.643919 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:48.669618 1620518 cri.go:89] found id: ""
	I1209 04:45:48.669632 1620518 logs.go:282] 0 containers: []
	W1209 04:45:48.669642 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:48.669647 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:48.669710 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:48.699049 1620518 cri.go:89] found id: ""
	I1209 04:45:48.699063 1620518 logs.go:282] 0 containers: []
	W1209 04:45:48.699070 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:48.699077 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:48.699088 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:48.731315 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:48.731331 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:48.798219 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:48.798239 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:48.813603 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:48.813620 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:48.877674 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:48.869445   16732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:48.870317   16732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:48.871899   16732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:48.872215   16732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:48.873716   16732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:45:48.869445   16732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:48.870317   16732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:48.871899   16732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:48.872215   16732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:48.873716   16732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:45:48.877684 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:48.877695 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:51.447558 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:51.457634 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:51.457694 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:51.487281 1620518 cri.go:89] found id: ""
	I1209 04:45:51.487294 1620518 logs.go:282] 0 containers: []
	W1209 04:45:51.487301 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:51.487306 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:51.487364 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:51.518737 1620518 cri.go:89] found id: ""
	I1209 04:45:51.518751 1620518 logs.go:282] 0 containers: []
	W1209 04:45:51.518758 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:51.518763 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:51.518837 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:51.544469 1620518 cri.go:89] found id: ""
	I1209 04:45:51.544481 1620518 logs.go:282] 0 containers: []
	W1209 04:45:51.544488 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:51.544493 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:51.544549 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:51.569588 1620518 cri.go:89] found id: ""
	I1209 04:45:51.569602 1620518 logs.go:282] 0 containers: []
	W1209 04:45:51.569624 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:51.569628 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:51.569687 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:51.612979 1620518 cri.go:89] found id: ""
	I1209 04:45:51.612992 1620518 logs.go:282] 0 containers: []
	W1209 04:45:51.612999 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:51.613004 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:51.613062 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:51.646866 1620518 cri.go:89] found id: ""
	I1209 04:45:51.646880 1620518 logs.go:282] 0 containers: []
	W1209 04:45:51.646886 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:51.646892 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:51.646954 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:51.672767 1620518 cri.go:89] found id: ""
	I1209 04:45:51.672781 1620518 logs.go:282] 0 containers: []
	W1209 04:45:51.672788 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:51.672795 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:51.672805 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:51.738601 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:51.738620 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:51.753536 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:51.753553 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:51.823113 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:51.814616   16823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:51.815237   16823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:51.816978   16823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:51.817576   16823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:51.819130   16823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:45:51.814616   16823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:51.815237   16823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:51.816978   16823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:51.817576   16823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:51.819130   16823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:45:51.823124 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:51.823134 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:51.895060 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:51.895078 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:54.424057 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:54.434546 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:54.434637 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:54.461148 1620518 cri.go:89] found id: ""
	I1209 04:45:54.461161 1620518 logs.go:282] 0 containers: []
	W1209 04:45:54.461179 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:54.461185 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:54.461245 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:54.491296 1620518 cri.go:89] found id: ""
	I1209 04:45:54.491310 1620518 logs.go:282] 0 containers: []
	W1209 04:45:54.491316 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:54.491322 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:54.491377 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:54.517141 1620518 cri.go:89] found id: ""
	I1209 04:45:54.517155 1620518 logs.go:282] 0 containers: []
	W1209 04:45:54.517162 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:54.517168 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:54.517228 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:54.543226 1620518 cri.go:89] found id: ""
	I1209 04:45:54.543245 1620518 logs.go:282] 0 containers: []
	W1209 04:45:54.543252 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:54.543258 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:54.543318 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:54.574984 1620518 cri.go:89] found id: ""
	I1209 04:45:54.574998 1620518 logs.go:282] 0 containers: []
	W1209 04:45:54.575005 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:54.575010 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:54.575069 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:54.612321 1620518 cri.go:89] found id: ""
	I1209 04:45:54.612335 1620518 logs.go:282] 0 containers: []
	W1209 04:45:54.612342 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:54.612347 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:54.612405 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:54.639817 1620518 cri.go:89] found id: ""
	I1209 04:45:54.639831 1620518 logs.go:282] 0 containers: []
	W1209 04:45:54.639839 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:54.639847 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:54.639858 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:54.704579 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:54.696022   16926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:54.696791   16926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:54.698435   16926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:54.699124   16926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:54.700720   16926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:45:54.696022   16926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:54.696791   16926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:54.698435   16926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:54.699124   16926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:54.700720   16926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:45:54.704588 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:54.704610 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:54.772943 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:54.772962 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:54.802082 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:54.802097 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:54.873250 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:54.873278 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:57.389092 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:57.399566 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:57.399631 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:57.424671 1620518 cri.go:89] found id: ""
	I1209 04:45:57.424685 1620518 logs.go:282] 0 containers: []
	W1209 04:45:57.424692 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:57.424698 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:57.424755 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:57.449520 1620518 cri.go:89] found id: ""
	I1209 04:45:57.449533 1620518 logs.go:282] 0 containers: []
	W1209 04:45:57.449549 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:57.449554 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:57.449612 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:57.474934 1620518 cri.go:89] found id: ""
	I1209 04:45:57.474949 1620518 logs.go:282] 0 containers: []
	W1209 04:45:57.474956 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:57.474961 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:57.475017 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:57.504272 1620518 cri.go:89] found id: ""
	I1209 04:45:57.504285 1620518 logs.go:282] 0 containers: []
	W1209 04:45:57.504292 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:57.504297 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:57.504355 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:57.530784 1620518 cri.go:89] found id: ""
	I1209 04:45:57.530797 1620518 logs.go:282] 0 containers: []
	W1209 04:45:57.530804 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:57.530820 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:57.530878 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:57.556189 1620518 cri.go:89] found id: ""
	I1209 04:45:57.556202 1620518 logs.go:282] 0 containers: []
	W1209 04:45:57.556209 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:57.556214 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:57.556271 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:57.584245 1620518 cri.go:89] found id: ""
	I1209 04:45:57.584258 1620518 logs.go:282] 0 containers: []
	W1209 04:45:57.584266 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:57.584273 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:57.584286 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:57.618235 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:57.618250 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:57.693384 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:57.693403 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:57.708210 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:57.708227 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:57.773409 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:57.765285   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:57.766046   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:57.767558   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:57.768018   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:57.769496   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:45:57.765285   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:57.766046   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:57.767558   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:57.768018   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:57.769496   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:45:57.773420 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:57.773430 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:46:00.342809 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:46:00.358795 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:46:00.358876 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:46:00.400877 1620518 cri.go:89] found id: ""
	I1209 04:46:00.400892 1620518 logs.go:282] 0 containers: []
	W1209 04:46:00.400900 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:46:00.400906 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:46:00.400970 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:46:00.431798 1620518 cri.go:89] found id: ""
	I1209 04:46:00.431813 1620518 logs.go:282] 0 containers: []
	W1209 04:46:00.431820 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:46:00.431828 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:46:00.431892 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:46:00.460666 1620518 cri.go:89] found id: ""
	I1209 04:46:00.460686 1620518 logs.go:282] 0 containers: []
	W1209 04:46:00.460693 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:46:00.460698 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:46:00.460761 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:46:00.488457 1620518 cri.go:89] found id: ""
	I1209 04:46:00.488471 1620518 logs.go:282] 0 containers: []
	W1209 04:46:00.488479 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:46:00.488484 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:46:00.488551 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:46:00.517784 1620518 cri.go:89] found id: ""
	I1209 04:46:00.517797 1620518 logs.go:282] 0 containers: []
	W1209 04:46:00.517805 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:46:00.517810 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:46:00.517873 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:46:00.545946 1620518 cri.go:89] found id: ""
	I1209 04:46:00.545960 1620518 logs.go:282] 0 containers: []
	W1209 04:46:00.545968 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:46:00.545973 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:46:00.546035 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:46:00.575131 1620518 cri.go:89] found id: ""
	I1209 04:46:00.575153 1620518 logs.go:282] 0 containers: []
	W1209 04:46:00.575161 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:46:00.575168 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:46:00.575179 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:46:00.612360 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:46:00.612379 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:46:00.689205 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:46:00.689224 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:46:00.704596 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:46:00.704612 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:46:00.770156 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:46:00.762022   17152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:00.762546   17152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:00.764120   17152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:00.764452   17152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:00.765962   17152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:46:00.762022   17152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:00.762546   17152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:00.764120   17152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:00.764452   17152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:00.765962   17152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:46:00.770165 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:46:00.770175 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:46:03.338719 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:46:03.349336 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:46:03.349402 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:46:03.374937 1620518 cri.go:89] found id: ""
	I1209 04:46:03.374950 1620518 logs.go:282] 0 containers: []
	W1209 04:46:03.374957 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:46:03.374963 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:46:03.375022 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:46:03.405176 1620518 cri.go:89] found id: ""
	I1209 04:46:03.405206 1620518 logs.go:282] 0 containers: []
	W1209 04:46:03.405213 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:46:03.405219 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:46:03.405285 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:46:03.434836 1620518 cri.go:89] found id: ""
	I1209 04:46:03.434860 1620518 logs.go:282] 0 containers: []
	W1209 04:46:03.434868 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:46:03.434874 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:46:03.434948 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:46:03.464055 1620518 cri.go:89] found id: ""
	I1209 04:46:03.464077 1620518 logs.go:282] 0 containers: []
	W1209 04:46:03.464085 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:46:03.464090 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:46:03.464189 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:46:03.493083 1620518 cri.go:89] found id: ""
	I1209 04:46:03.493106 1620518 logs.go:282] 0 containers: []
	W1209 04:46:03.493114 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:46:03.493119 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:46:03.493194 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:46:03.518929 1620518 cri.go:89] found id: ""
	I1209 04:46:03.518942 1620518 logs.go:282] 0 containers: []
	W1209 04:46:03.518950 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:46:03.518955 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:46:03.519016 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:46:03.543738 1620518 cri.go:89] found id: ""
	I1209 04:46:03.543751 1620518 logs.go:282] 0 containers: []
	W1209 04:46:03.543758 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:46:03.543766 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:46:03.543776 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:46:03.611972 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:46:03.611992 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:46:03.644882 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:46:03.644905 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:46:03.715853 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:46:03.715873 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:46:03.730852 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:46:03.730870 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:46:03.797963 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:46:03.789266   17259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:03.790005   17259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:03.791740   17259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:03.792349   17259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:03.794037   17259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:46:03.789266   17259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:03.790005   17259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:03.791740   17259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:03.792349   17259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:03.794037   17259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:46:06.299034 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:46:06.310369 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:46:06.310430 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:46:06.338011 1620518 cri.go:89] found id: ""
	I1209 04:46:06.338024 1620518 logs.go:282] 0 containers: []
	W1209 04:46:06.338031 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:46:06.338037 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:46:06.338093 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:46:06.364537 1620518 cri.go:89] found id: ""
	I1209 04:46:06.364551 1620518 logs.go:282] 0 containers: []
	W1209 04:46:06.364558 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:46:06.364566 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:46:06.364621 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:46:06.390874 1620518 cri.go:89] found id: ""
	I1209 04:46:06.390894 1620518 logs.go:282] 0 containers: []
	W1209 04:46:06.390907 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:46:06.390912 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:46:06.390972 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:46:06.416068 1620518 cri.go:89] found id: ""
	I1209 04:46:06.416082 1620518 logs.go:282] 0 containers: []
	W1209 04:46:06.416088 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:46:06.416093 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:46:06.416152 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:46:06.445711 1620518 cri.go:89] found id: ""
	I1209 04:46:06.445724 1620518 logs.go:282] 0 containers: []
	W1209 04:46:06.445731 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:46:06.445736 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:46:06.445794 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:46:06.472619 1620518 cri.go:89] found id: ""
	I1209 04:46:06.472632 1620518 logs.go:282] 0 containers: []
	W1209 04:46:06.472639 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:46:06.472644 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:46:06.472704 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:46:06.501335 1620518 cri.go:89] found id: ""
	I1209 04:46:06.501348 1620518 logs.go:282] 0 containers: []
	W1209 04:46:06.501355 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:46:06.501372 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:46:06.501382 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:46:06.564989 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:46:06.556947   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:06.557432   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:06.559150   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:06.559456   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:06.560989   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:46:06.556947   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:06.557432   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:06.559150   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:06.559456   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:06.560989   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:46:06.564998 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:46:06.565009 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:46:06.636608 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:46:06.636626 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:46:06.667969 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:46:06.667986 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:46:06.734125 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:46:06.734145 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
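
	The timestamps show the probe cadence: sudo pgrep -xnf kube-apiserver.*minikube.* fires roughly every three seconds, and each miss triggers another container sweep and log gather. The retry loop amounts to something like this sketch (the 3-second interval is read off the timestamps above, not a documented constant):

	    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
	      sleep 3   # then re-list containers and re-gather logs
	    done
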
	I1209 04:46:09.249456 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:46:09.259765 1620518 kubeadm.go:602] duration metric: took 4m2.693827645s to restartPrimaryControlPlane
	W1209 04:46:09.259826 1620518 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1209 04:46:09.259905 1620518 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
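
	After 4m02s of polling without ever finding a kube-apiserver process, minikube abandons the restart path and falls back to wiping the control plane before re-initializing it. The reset invocation, re-quoted so it can be pasted into a shell (same bundled binary path and CRI socket as logged):

	    sudo /bin/bash -c 'env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
	      kubeadm reset --cri-socket /var/run/crio/crio.sock --force'
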
	I1209 04:46:09.672351 1620518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 04:46:09.685870 1620518 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 04:46:09.693855 1620518 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 04:46:09.693913 1620518 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 04:46:09.701686 1620518 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 04:46:09.701697 1620518 kubeadm.go:158] found existing configuration files:
	
	I1209 04:46:09.701750 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1209 04:46:09.709486 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 04:46:09.709542 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 04:46:09.717080 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1209 04:46:09.724681 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 04:46:09.724735 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 04:46:09.732335 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1209 04:46:09.740201 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 04:46:09.740255 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 04:46:09.747717 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1209 04:46:09.755316 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 04:46:09.755370 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 04:46:09.762723 1620518 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 04:46:09.800341 1620518 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1209 04:46:09.800668 1620518 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 04:46:09.867665 1620518 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 04:46:09.867727 1620518 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1209 04:46:09.867766 1620518 kubeadm.go:319] OS: Linux
	I1209 04:46:09.867807 1620518 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 04:46:09.867852 1620518 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1209 04:46:09.867896 1620518 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 04:46:09.867942 1620518 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 04:46:09.867987 1620518 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 04:46:09.868032 1620518 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 04:46:09.868074 1620518 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 04:46:09.868120 1620518 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 04:46:09.868162 1620518 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1209 04:46:09.937281 1620518 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 04:46:09.937384 1620518 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 04:46:09.937481 1620518 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 04:46:09.947317 1620518 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 04:46:09.952721 1620518 out.go:252]   - Generating certificates and keys ...
	I1209 04:46:09.952808 1620518 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 04:46:09.952877 1620518 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 04:46:09.952958 1620518 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 04:46:09.953021 1620518 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1209 04:46:09.953092 1620518 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 04:46:09.953141 1620518 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1209 04:46:09.953206 1620518 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1209 04:46:09.953269 1620518 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1209 04:46:09.953343 1620518 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 04:46:09.953417 1620518 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 04:46:09.953461 1620518 kubeadm.go:319] [certs] Using the existing "sa" key
	I1209 04:46:09.953513 1620518 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 04:46:10.029245 1620518 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 04:46:10.224354 1620518 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 04:46:10.667691 1620518 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 04:46:10.882600 1620518 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 04:46:11.073140 1620518 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 04:46:11.073694 1620518 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 04:46:11.076408 1620518 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 04:46:11.079859 1620518 out.go:252]   - Booting up control plane ...
	I1209 04:46:11.079965 1620518 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 04:46:11.080042 1620518 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 04:46:11.080114 1620518 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 04:46:11.095853 1620518 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 04:46:11.095951 1620518 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 04:46:11.104994 1620518 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 04:46:11.105485 1620518 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 04:46:11.105715 1620518 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 04:46:11.236975 1620518 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 04:46:11.237088 1620518 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 04:50:11.237231 1620518 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000344141s
	I1209 04:50:11.237256 1620518 kubeadm.go:319] 
	I1209 04:50:11.237309 1620518 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1209 04:50:11.237340 1620518 kubeadm.go:319] 	- The kubelet is not running
	I1209 04:50:11.237438 1620518 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 04:50:11.237443 1620518 kubeadm.go:319] 
	I1209 04:50:11.237541 1620518 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 04:50:11.237571 1620518 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1209 04:50:11.237600 1620518 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1209 04:50:11.237603 1620518 kubeadm.go:319] 
	I1209 04:50:11.241458 1620518 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 04:50:11.241910 1620518 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1209 04:50:11.242023 1620518 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 04:50:11.242266 1620518 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1209 04:50:11.242272 1620518 kubeadm.go:319] 
	I1209 04:50:11.242336 1620518 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1209 04:50:11.242454 1620518 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000344141s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
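
The failure report above narrows things to the kubelet health probe on 127.0.0.1:10248. A minimal manual triage sketch along the lines the output itself suggests (assuming shell access to the node, e.g. via `minikube -p functional-331811 ssh`; the profile name is the one used elsewhere in this log):

    # Probe the same endpoint kubeadm polls; connection refused means the kubelet never came up.
    curl -sSL http://127.0.0.1:10248/healthz || true
    # The two commands kubeadm itself recommends:
    systemctl status kubelet --no-pager
    journalctl -xeu kubelet --no-pager | tail -n 50
    # Identify the cgroup hierarchy: 'tmpfs' indicates cgroup v1, 'cgroup2fs' indicates v2.
    stat -fc %T /sys/fs/cgroup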
	
	I1209 04:50:11.242544 1620518 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1209 04:50:11.655787 1620518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 04:50:11.668676 1620518 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 04:50:11.668730 1620518 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 04:50:11.676546 1620518 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 04:50:11.676562 1620518 kubeadm.go:158] found existing configuration files:
	
	I1209 04:50:11.676612 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1209 04:50:11.684172 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 04:50:11.684236 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 04:50:11.691594 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1209 04:50:11.699302 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 04:50:11.699363 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 04:50:11.706772 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1209 04:50:11.714846 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 04:50:11.714902 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 04:50:11.722267 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1209 04:50:11.730186 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 04:50:11.730250 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 04:50:11.738143 1620518 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 04:50:11.781074 1620518 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1209 04:50:11.781123 1620518 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 04:50:11.856141 1620518 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 04:50:11.856206 1620518 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1209 04:50:11.856240 1620518 kubeadm.go:319] OS: Linux
	I1209 04:50:11.856283 1620518 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 04:50:11.856330 1620518 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1209 04:50:11.856377 1620518 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 04:50:11.856424 1620518 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 04:50:11.856471 1620518 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 04:50:11.856522 1620518 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 04:50:11.856566 1620518 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 04:50:11.856614 1620518 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 04:50:11.856660 1620518 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1209 04:50:11.927746 1620518 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 04:50:11.927875 1620518 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 04:50:11.927971 1620518 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 04:50:11.934983 1620518 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 04:50:11.938507 1620518 out.go:252]   - Generating certificates and keys ...
	I1209 04:50:11.938697 1620518 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 04:50:11.938772 1620518 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 04:50:11.938867 1620518 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 04:50:11.938937 1620518 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1209 04:50:11.939018 1620518 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 04:50:11.939071 1620518 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1209 04:50:11.939143 1620518 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1209 04:50:11.939213 1620518 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1209 04:50:11.939302 1620518 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 04:50:11.939383 1620518 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 04:50:11.939690 1620518 kubeadm.go:319] [certs] Using the existing "sa" key
	I1209 04:50:11.939748 1620518 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 04:50:12.353584 1620518 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 04:50:12.812738 1620518 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 04:50:13.265058 1620518 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 04:50:13.417250 1620518 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 04:50:13.472548 1620518 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 04:50:13.473076 1620518 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 04:50:13.475724 1620518 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 04:50:13.478920 1620518 out.go:252]   - Booting up control plane ...
	I1209 04:50:13.479026 1620518 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 04:50:13.479104 1620518 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 04:50:13.479930 1620518 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 04:50:13.496348 1620518 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 04:50:13.496458 1620518 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 04:50:13.504378 1620518 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 04:50:13.504655 1620518 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 04:50:13.504696 1620518 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 04:50:13.630713 1620518 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 04:50:13.630826 1620518 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 04:54:13.630972 1620518 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000259173s
	I1209 04:54:13.630997 1620518 kubeadm.go:319] 
	I1209 04:54:13.631053 1620518 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1209 04:54:13.631086 1620518 kubeadm.go:319] 	- The kubelet is not running
	I1209 04:54:13.631200 1620518 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 04:54:13.631206 1620518 kubeadm.go:319] 
	I1209 04:54:13.631310 1620518 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 04:54:13.631395 1620518 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1209 04:54:13.631461 1620518 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1209 04:54:13.631466 1620518 kubeadm.go:319] 
	I1209 04:54:13.635649 1620518 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 04:54:13.636127 1620518 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1209 04:54:13.636242 1620518 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 04:54:13.636479 1620518 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1209 04:54:13.636485 1620518 kubeadm.go:319] 
	I1209 04:54:13.636553 1620518 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1209 04:54:13.636616 1620518 kubeadm.go:403] duration metric: took 12m7.110467735s to StartCluster
	I1209 04:54:13.636648 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:54:13.636715 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:54:13.662011 1620518 cri.go:89] found id: ""
	I1209 04:54:13.662024 1620518 logs.go:282] 0 containers: []
	W1209 04:54:13.662032 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:54:13.662037 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:54:13.662094 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:54:13.688278 1620518 cri.go:89] found id: ""
	I1209 04:54:13.688293 1620518 logs.go:282] 0 containers: []
	W1209 04:54:13.688299 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:54:13.688304 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:54:13.688363 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:54:13.714700 1620518 cri.go:89] found id: ""
	I1209 04:54:13.714715 1620518 logs.go:282] 0 containers: []
	W1209 04:54:13.714723 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:54:13.714729 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:54:13.714795 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:54:13.740152 1620518 cri.go:89] found id: ""
	I1209 04:54:13.740166 1620518 logs.go:282] 0 containers: []
	W1209 04:54:13.740173 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:54:13.740178 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:54:13.740235 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:54:13.766214 1620518 cri.go:89] found id: ""
	I1209 04:54:13.766227 1620518 logs.go:282] 0 containers: []
	W1209 04:54:13.766235 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:54:13.766240 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:54:13.766300 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:54:13.793141 1620518 cri.go:89] found id: ""
	I1209 04:54:13.793155 1620518 logs.go:282] 0 containers: []
	W1209 04:54:13.793162 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:54:13.793168 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:54:13.793225 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:54:13.824264 1620518 cri.go:89] found id: ""
	I1209 04:54:13.824278 1620518 logs.go:282] 0 containers: []
	W1209 04:54:13.824286 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:54:13.824294 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:54:13.824305 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:54:13.865509 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:54:13.865527 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:54:13.944055 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:54:13.944075 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:54:13.960571 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:54:13.960593 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:54:14.028160 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:54:14.019001   21174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:54:14.019792   21174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:54:14.021489   21174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:54:14.021862   21174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:54:14.023410   21174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:54:14.019001   21174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:54:14.019792   21174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:54:14.021489   21174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:54:14.021862   21174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:54:14.023410   21174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:54:14.028170 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:54:14.028180 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1209 04:54:14.099915 1620518 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000259173s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1209 04:54:14.099962 1620518 out.go:285] * 
	W1209 04:54:14.100108 1620518 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000259173s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 04:54:14.100197 1620518 out.go:285] * 
	W1209 04:54:14.102317 1620518 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 04:54:14.107888 1620518 out.go:203] 
	W1209 04:54:14.111655 1620518 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000259173s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 04:54:14.111892 1620518 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1209 04:54:14.111932 1620518 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1209 04:54:14.116964 1620518 out.go:203] 
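
The Suggestion above amounts to restarting the cluster with an explicit kubelet cgroup driver. A hedged sketch of that retry (the --extra-config value is taken verbatim from the Suggestion line; the profile, driver, and runtime flags are assumptions based on this test's configuration):

    minikube start -p functional-331811 --driver=docker --container-runtime=crio \
      --extra-config=kubelet.cgroup-driver=systemd

Note that the kubelet log at the end of this report points at cgroup v1 support itself rather than the driver, so this retry alone may not be sufficient.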
	
	
	==> CRI-O <==
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927580587Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927620637Z" level=info msg="Starting seccomp notifier watcher"
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927668178Z" level=info msg="Create NRI interface"
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927758033Z" level=info msg="built-in NRI default validator is disabled"
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927766493Z" level=info msg="runtime interface created"
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927780007Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927786308Z" level=info msg="runtime interface starting up..."
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927792741Z" level=info msg="starting plugins..."
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927805771Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927872323Z" level=info msg="No systemd watchdog enabled"
	Dec 09 04:42:04 functional-331811 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 09 04:46:09 functional-331811 crio[9992]: time="2025-12-09T04:46:09.942951614Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=d42015e0-8a7e-47f7-95a2-398ea8aa48f1 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:46:09 functional-331811 crio[9992]: time="2025-12-09T04:46:09.943749037Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=554d2336-7df0-4ab3-87a2-3f0040c79a84 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:46:09 functional-331811 crio[9992]: time="2025-12-09T04:46:09.944291229Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=70fb14c4-f971-4387-8e1b-10c98c4791aa name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:46:09 functional-331811 crio[9992]: time="2025-12-09T04:46:09.944730675Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=36db540a-ff25-4b5c-b7d7-cd7322fbd4bb name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:46:09 functional-331811 crio[9992]: time="2025-12-09T04:46:09.945138629Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=7427d70a-8db2-44c3-88f8-0607ec671ff6 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:46:09 functional-331811 crio[9992]: time="2025-12-09T04:46:09.945576229Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=b63b04fd-62c4-4cf0-9b5b-23eef2eb12c5 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:46:09 functional-331811 crio[9992]: time="2025-12-09T04:46:09.946074564Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=287329f7-949c-4b5b-8433-0437004398fd name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.930917732Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=60059689-b22e-4d2c-a555-518b088e6c52 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.93157629Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=cbef184f-5cab-42ab-88e7-b508de5c76c0 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.932075323Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=edcddd48-11b2-4a3e-b703-e9cffa332272 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.932520767Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=b8ee1139-0fe9-45a4-8cea-2e86a978a2fc name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.932923437Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=466ae3ad-f5a9-4d87-be0b-42f8886ae7b1 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.933429871Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=52758864-5ad7-4972-9017-2c4a591649f4 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.933861662Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=61e91b9e-e75b-4cf2-b677-070bdf524fb9 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:56:27.888825   23403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:56:27.890290   23403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:56:27.890977   23403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:56:27.892614   23403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:56:27.893303   23403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
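
The repeated "connection refused" simply reflects that no kube-apiserver is listening on 8441, consistent with the empty container list above. A quick confirmation on the node, assuming standard tooling such as ss is present:

    # Expect no listener, since no kube-apiserver container was ever started:
    ss -ltnp | grep 8441 || echo "nothing listening on 8441"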
	
	
	==> dmesg <==
	[Dec 9 02:15] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 03:35] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 04:15] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 04:17] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:23] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:24] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:41] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 04:56:27 up  9:38,  0 user,  load average: 0.90, 0.36, 0.45
	Linux functional-331811 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 09 04:56:25 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:56:26 functional-331811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1137.
	Dec 09 04:56:26 functional-331811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:56:26 functional-331811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:56:26 functional-331811 kubelet[23261]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:56:26 functional-331811 kubelet[23261]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:56:26 functional-331811 kubelet[23261]: E1209 04:56:26.148080   23261 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:56:26 functional-331811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:56:26 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:56:26 functional-331811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1138.
	Dec 09 04:56:26 functional-331811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:56:26 functional-331811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:56:26 functional-331811 kubelet[23298]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:56:26 functional-331811 kubelet[23298]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:56:26 functional-331811 kubelet[23298]: E1209 04:56:26.892817   23298 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:56:26 functional-331811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:56:26 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:56:27 functional-331811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1139.
	Dec 09 04:56:27 functional-331811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:56:27 functional-331811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:56:27 functional-331811 kubelet[23334]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:56:27 functional-331811 kubelet[23334]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:56:27 functional-331811 kubelet[23334]: E1209 04:56:27.636687   23334 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:56:27 functional-331811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:56:27 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
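
The kubelet crash loop in the log above (restart counter climbing from 1137 to 1139) reduces to a single validation error: this kubelet build refuses to start on a cgroup v1 host. A minimal sketch for confirming the host's cgroup version, assuming nothing beyond the container name taken from this report; cgroup2fs means v2, tmpfs means v1:

	# On the CI host: which cgroup hierarchy is mounted?
	stat -fc %T /sys/fs/cgroup/
	# Same check inside the minikube node container.
	docker exec functional-331811 stat -fc %T /sys/fs/cgroup/

On a v1 host the validation fails identically on every systemd restart, which is why the counter keeps climbing while the apiserver on port 8441 never comes up.
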
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-331811 -n functional-331811
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-331811 -n functional-331811: exit status 2 (354.773894ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-331811" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.11s)
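
Both probes in this test read one component each through minikube's Go-template status output. A sketch of the equivalent manual checks, assuming an installed minikube binary in place of the test build out/minikube-linux-arm64:

	# Each call prints a single field; a non-zero exit code flags a degraded component.
	minikube status --format='{{.Host}}' -p functional-331811       # Running in this run
	minikube status --format='{{.APIServer}}' -p functional-331811  # Stopped in this run

A Running host paired with a Stopped apiserver is exactly what the kubelet crash loop above predicts.
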

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect


=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-331811 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-331811 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (59.934698ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-331811 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-331811 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-331811 describe po hello-node-connect: exit status 1 (57.841099ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1614: "kubectl --context functional-331811 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-331811 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-331811 logs -l app=hello-node-connect: exit status 1 (61.50935ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1620: "kubectl --context functional-331811 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-331811 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-331811 describe svc hello-node-connect: exit status 1 (63.267209ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1626: "kubectl --context functional-331811 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
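
Every kubectl call in this post-mortem dies on the same refused connection, so the fastest triage is a direct reachability probe of the apiserver endpoint before rerunning anything. A minimal sketch using the address from the errors above; -k skips certificate verification, which is acceptable for a liveness check:

	# A healthy apiserver answers /healthz with "ok"; "connection refused" here
	# matches the kubelet crash loop recorded earlier in this report.
	curl -sk --max-time 5 https://192.168.49.2:8441/healthz || echo "apiserver unreachable"
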
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-331811
helpers_test.go:243: (dbg) docker inspect functional-331811:

-- stdout --
	[
	    {
	        "Id": "51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87",
	        "Created": "2025-12-09T04:27:19.770188806Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1609115,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T04:27:19.828715728Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/hostname",
	        "HostsPath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/hosts",
	        "LogPath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87-json.log",
	        "Name": "/functional-331811",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-331811:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-331811",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87",
	                "LowerDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685-init/diff:/var/lib/docker/overlay2/cb3f2b8eaaa8875b2899fccd39c4eec1759909855a0b804bc10246bdeabb16ed/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-331811",
	                "Source": "/var/lib/docker/volumes/functional-331811/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-331811",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-331811",
	                "name.minikube.sigs.k8s.io": "functional-331811",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5c0753338127320f08906f0ae98414e1971b55970cf028db179c2214fd2722cb",
	            "SandboxKey": "/var/run/docker/netns/5c0753338127",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34255"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34256"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34259"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34257"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34258"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-331811": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:27:66:bb:a1:d6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8c16962547dedb5d6155d1546bcc27e347ab5261f9ad46fc3b09cc8fb9cc112f",
	                    "EndpointID": "1a5d6a22e9497009b4121ea56dc4839e2ff8827d92252c0464236c5f49c11216",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-331811",
	                        "51da5dad63e9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
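
The two fields that matter in this dump, the node's address on the minikube network and the localhost port forwarded to the apiserver, can be read directly instead of scanning the full JSON. A sketch with docker inspect format templates against the same container; the second template mirrors the 22/tcp lookup minikube itself runs later in these logs:

	# Node IP on the functional-331811 network (192.168.49.2 in this run).
	docker inspect -f '{{(index .NetworkSettings.Networks "functional-331811").IPAddress}}' functional-331811
	# Host port mapped to the apiserver's 8441/tcp (34258 in this run).
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-331811
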
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-331811 -n functional-331811
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-331811 -n functional-331811: exit status 2 (331.969ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                            ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cache   │ functional-331811 cache reload                                                                                                                              │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ ssh     │ functional-331811 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                     │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                            │ minikube          │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                         │ minikube          │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │ 09 Dec 25 04:41 UTC │
	│ kubectl │ functional-331811 kubectl -- --context functional-331811 get pods                                                                                           │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:41 UTC │                     │
	│ start   │ -p functional-331811 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                    │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:42 UTC │                     │
	│ cp      │ functional-331811 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                          │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:54 UTC │ 09 Dec 25 04:54 UTC │
	│ config  │ functional-331811 config unset cpus                                                                                                                         │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:54 UTC │ 09 Dec 25 04:54 UTC │
	│ config  │ functional-331811 config get cpus                                                                                                                           │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:54 UTC │                     │
	│ config  │ functional-331811 config set cpus 2                                                                                                                         │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:54 UTC │ 09 Dec 25 04:54 UTC │
	│ config  │ functional-331811 config get cpus                                                                                                                           │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:54 UTC │ 09 Dec 25 04:54 UTC │
	│ config  │ functional-331811 config unset cpus                                                                                                                         │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:54 UTC │ 09 Dec 25 04:54 UTC │
	│ ssh     │ functional-331811 ssh -n functional-331811 sudo cat /home/docker/cp-test.txt                                                                                │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:54 UTC │ 09 Dec 25 04:54 UTC │
	│ config  │ functional-331811 config get cpus                                                                                                                           │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:54 UTC │                     │
	│ ssh     │ functional-331811 ssh echo hello                                                                                                                            │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:54 UTC │ 09 Dec 25 04:54 UTC │
	│ cp      │ functional-331811 cp functional-331811:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp691637449/001/cp-test.txt │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:54 UTC │ 09 Dec 25 04:54 UTC │
	│ ssh     │ functional-331811 ssh cat /etc/hostname                                                                                                                     │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:54 UTC │ 09 Dec 25 04:54 UTC │
	│ ssh     │ functional-331811 ssh -n functional-331811 sudo cat /home/docker/cp-test.txt                                                                                │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:54 UTC │ 09 Dec 25 04:54 UTC │
	│ tunnel  │ functional-331811 tunnel --alsologtostderr                                                                                                                  │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:54 UTC │                     │
	│ tunnel  │ functional-331811 tunnel --alsologtostderr                                                                                                                  │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:54 UTC │                     │
	│ cp      │ functional-331811 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                   │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:54 UTC │ 09 Dec 25 04:54 UTC │
	│ tunnel  │ functional-331811 tunnel --alsologtostderr                                                                                                                  │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:54 UTC │                     │
	│ ssh     │ functional-331811 ssh -n functional-331811 sudo cat /tmp/does/not/exist/cp-test.txt                                                                         │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:54 UTC │ 09 Dec 25 04:54 UTC │
	│ addons  │ functional-331811 addons list                                                                                                                               │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ addons  │ functional-331811 addons list -o json                                                                                                                       │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
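	
	The start entry above with an empty END TIME is the 04:42 restart that never completed; it carries the apiserver admission-plugin override under test. For reference, the flag shape as minikube accepts it (component.key=value), reconstructed from that row:
	
		minikube start -p functional-331811 \
		  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
		  --wait=all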
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 04:42:01
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 04:42:01.637786 1620518 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:42:01.637909 1620518 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:42:01.637913 1620518 out.go:374] Setting ErrFile to fd 2...
	I1209 04:42:01.637918 1620518 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:42:01.638166 1620518 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 04:42:01.638522 1620518 out.go:368] Setting JSON to false
	I1209 04:42:01.639450 1620518 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":33862,"bootTime":1765221460,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1209 04:42:01.639510 1620518 start.go:143] virtualization:  
	I1209 04:42:01.642955 1620518 out.go:179] * [functional-331811] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 04:42:01.646014 1620518 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 04:42:01.646101 1620518 notify.go:221] Checking for updates...
	I1209 04:42:01.651837 1620518 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:42:01.654857 1620518 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:42:01.657670 1620518 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1577059/.minikube
	I1209 04:42:01.660510 1620518 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 04:42:01.663383 1620518 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 04:42:01.666731 1620518 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 04:42:01.666828 1620518 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:42:01.689070 1620518 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:42:01.689175 1620518 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:42:01.744025 1620518 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-09 04:42:01.734708732 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:42:01.744121 1620518 docker.go:319] overlay module found
	I1209 04:42:01.749121 1620518 out.go:179] * Using the docker driver based on existing profile
	I1209 04:42:01.751932 1620518 start.go:309] selected driver: docker
	I1209 04:42:01.751941 1620518 start.go:927] validating driver "docker" against &{Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:42:01.752051 1620518 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 04:42:01.752158 1620518 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:42:01.824076 1620518 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-09 04:42:01.81179321 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:42:01.824456 1620518 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 04:42:01.824480 1620518 cni.go:84] Creating CNI manager for ""
	I1209 04:42:01.824537 1620518 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 04:42:01.824578 1620518 start.go:353] cluster config:
	{Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:42:01.827700 1620518 out.go:179] * Starting "functional-331811" primary control-plane node in "functional-331811" cluster
	I1209 04:42:01.830624 1620518 cache.go:134] Beginning downloading kic base image for docker with crio
	I1209 04:42:01.833519 1620518 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 04:42:01.836178 1620518 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1209 04:42:01.836217 1620518 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1209 04:42:01.836228 1620518 cache.go:65] Caching tarball of preloaded images
	I1209 04:42:01.836255 1620518 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 04:42:01.836324 1620518 preload.go:238] Found /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1209 04:42:01.836333 1620518 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1209 04:42:01.836451 1620518 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/config.json ...
	I1209 04:42:01.855430 1620518 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 04:42:01.855441 1620518 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 04:42:01.855455 1620518 cache.go:243] Successfully downloaded all kic artifacts
	I1209 04:42:01.855485 1620518 start.go:360] acquireMachinesLock for functional-331811: {Name:mkd467b4f3dd08f05040481144eb7b6b1e27d3ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 04:42:01.855543 1620518 start.go:364] duration metric: took 40.87µs to acquireMachinesLock for "functional-331811"
	I1209 04:42:01.855566 1620518 start.go:96] Skipping create...Using existing machine configuration
	I1209 04:42:01.855570 1620518 fix.go:54] fixHost starting: 
	I1209 04:42:01.855819 1620518 cli_runner.go:164] Run: docker container inspect functional-331811 --format={{.State.Status}}
	I1209 04:42:01.873325 1620518 fix.go:112] recreateIfNeeded on functional-331811: state=Running err=<nil>
	W1209 04:42:01.873351 1620518 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 04:42:01.876665 1620518 out.go:252] * Updating the running docker "functional-331811" container ...
	I1209 04:42:01.876693 1620518 machine.go:94] provisionDockerMachine start ...
	I1209 04:42:01.876797 1620518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:42:01.894796 1620518 main.go:143] libmachine: Using SSH client type: native
	I1209 04:42:01.895121 1620518 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34255 <nil> <nil>}
	I1209 04:42:01.895129 1620518 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 04:42:02.058680 1620518 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-331811
	
	I1209 04:42:02.058696 1620518 ubuntu.go:182] provisioning hostname "functional-331811"
	I1209 04:42:02.058761 1620518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:42:02.090920 1620518 main.go:143] libmachine: Using SSH client type: native
	I1209 04:42:02.091365 1620518 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34255 <nil> <nil>}
	I1209 04:42:02.091379 1620518 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-331811 && echo "functional-331811" | sudo tee /etc/hostname
	I1209 04:42:02.262883 1620518 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-331811
	
	I1209 04:42:02.262960 1620518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:42:02.281315 1620518 main.go:143] libmachine: Using SSH client type: native
	I1209 04:42:02.281623 1620518 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34255 <nil> <nil>}
	I1209 04:42:02.281637 1620518 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-331811' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-331811/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-331811' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 04:42:02.435135 1620518 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 04:42:02.435152 1620518 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1577059/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1577059/.minikube}
	I1209 04:42:02.435179 1620518 ubuntu.go:190] setting up certificates
	I1209 04:42:02.435197 1620518 provision.go:84] configureAuth start
	I1209 04:42:02.435267 1620518 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-331811
	I1209 04:42:02.452748 1620518 provision.go:143] copyHostCerts
	I1209 04:42:02.452806 1620518 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem, removing ...
	I1209 04:42:02.452813 1620518 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem
	I1209 04:42:02.452891 1620518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem (1078 bytes)
	I1209 04:42:02.452996 1620518 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem, removing ...
	I1209 04:42:02.453000 1620518 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem
	I1209 04:42:02.453027 1620518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem (1123 bytes)
	I1209 04:42:02.453088 1620518 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem, removing ...
	I1209 04:42:02.453092 1620518 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem
	I1209 04:42:02.453121 1620518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem (1675 bytes)
	I1209 04:42:02.453207 1620518 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem org=jenkins.functional-331811 san=[127.0.0.1 192.168.49.2 functional-331811 localhost minikube]
	I1209 04:42:02.729112 1620518 provision.go:177] copyRemoteCerts
	I1209 04:42:02.729174 1620518 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 04:42:02.729226 1620518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:42:02.747750 1620518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:42:02.856241 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 04:42:02.877475 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1209 04:42:02.898967 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 04:42:02.917189 1620518 provision.go:87] duration metric: took 481.970064ms to configureAuth
	I1209 04:42:02.917207 1620518 ubuntu.go:206] setting minikube options for container-runtime
	I1209 04:42:02.917407 1620518 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 04:42:02.917510 1620518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:42:02.935642 1620518 main.go:143] libmachine: Using SSH client type: native
	I1209 04:42:02.935957 1620518 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34255 <nil> <nil>}
	I1209 04:42:02.935968 1620518 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 04:42:03.293502 1620518 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 04:42:03.293517 1620518 machine.go:97] duration metric: took 1.416817164s to provisionDockerMachine
	I1209 04:42:03.293527 1620518 start.go:293] postStartSetup for "functional-331811" (driver="docker")
	I1209 04:42:03.293537 1620518 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 04:42:03.293597 1620518 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 04:42:03.293653 1620518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:42:03.312696 1620518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:42:03.419010 1620518 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 04:42:03.422897 1620518 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 04:42:03.422917 1620518 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 04:42:03.422927 1620518 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1577059/.minikube/addons for local assets ...
	I1209 04:42:03.422995 1620518 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1577059/.minikube/files for local assets ...
	I1209 04:42:03.423075 1620518 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem -> 15805212.pem in /etc/ssl/certs
	I1209 04:42:03.423167 1620518 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/test/nested/copy/1580521/hosts -> hosts in /etc/test/nested/copy/1580521
	I1209 04:42:03.423212 1620518 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1580521
	I1209 04:42:03.431449 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem --> /etc/ssl/certs/15805212.pem (1708 bytes)
	I1209 04:42:03.450423 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/test/nested/copy/1580521/hosts --> /etc/test/nested/copy/1580521/hosts (40 bytes)
	I1209 04:42:03.470159 1620518 start.go:296] duration metric: took 176.617533ms for postStartSetup
	I1209 04:42:03.470235 1620518 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 04:42:03.470292 1620518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:42:03.488346 1620518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:42:03.593519 1620518 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 04:42:03.598841 1620518 fix.go:56] duration metric: took 1.743264094s for fixHost
	I1209 04:42:03.598859 1620518 start.go:83] releasing machines lock for "functional-331811", held for 1.743308418s
	I1209 04:42:03.598929 1620518 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-331811
	I1209 04:42:03.617266 1620518 ssh_runner.go:195] Run: cat /version.json
	I1209 04:42:03.617315 1620518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:42:03.617558 1620518 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 04:42:03.617603 1620518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
	I1209 04:42:03.646611 1620518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:42:03.653495 1620518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
	I1209 04:42:03.852499 1620518 ssh_runner.go:195] Run: systemctl --version
	I1209 04:42:03.859513 1620518 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 04:42:03.897674 1620518 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 04:42:03.902590 1620518 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 04:42:03.902664 1620518 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 04:42:03.911194 1620518 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
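
Nothing matched here ("nothing to disable"), but when bridge or podman CNI configs do exist, the find/mv pipeline above renames them with a .mk_disabled suffix so that only the CNI minikube manages stays active. A sketch of that rename, with the glob patterns taken from the find invocation:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Move bridge/podman CNI configs aside by appending .mk_disabled,
// mirroring the find ... -exec mv step in the log above.
func main() {
	for _, pat := range []string{"*bridge*", "*podman*"} {
		matches, _ := filepath.Glob(filepath.Join("/etc/cni/net.d", pat))
		for _, m := range matches {
			if filepath.Ext(m) == ".mk_disabled" {
				continue // already disabled
			}
			fmt.Println("disabling", m)
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Println("rename failed:", err)
			}
		}
	}
}
```
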
	I1209 04:42:03.911208 1620518 start.go:496] detecting cgroup driver to use...
	I1209 04:42:03.911240 1620518 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 04:42:03.911304 1620518 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 04:42:03.926479 1620518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 04:42:03.940314 1620518 docker.go:218] disabling cri-docker service (if available) ...
	I1209 04:42:03.940374 1620518 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 04:42:03.956989 1620518 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 04:42:03.970857 1620518 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 04:42:04.105722 1620518 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 04:42:04.221024 1620518 docker.go:234] disabling docker service ...
	I1209 04:42:04.221082 1620518 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 04:42:04.236606 1620518 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 04:42:04.259126 1620518 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 04:42:04.406348 1620518 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 04:42:04.537870 1620518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 04:42:04.550770 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 04:42:04.565609 1620518 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 04:42:04.565666 1620518 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:42:04.574449 1620518 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 04:42:04.574512 1620518 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:42:04.583819 1620518 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:42:04.592696 1620518 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:42:04.601828 1620518 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 04:42:04.610342 1620518 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:42:04.619401 1620518 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:42:04.628176 1620518 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
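
The sed runs above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch cgroup_manager to cgroupfs, and re-insert conmon_cgroup = "pod" after it. A sketch of the same edits applied with Go regexps to an in-memory sample (the sample content is made up; only the two keys matter):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Stand-in for the real 02-crio.conf contents.
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	// Equivalent of: sed 's|^.*pause_image = .*$|pause_image = "..."|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// Equivalent of the cgroup_manager rewrite plus the conmon_cgroup re-insert.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}
```
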
	I1209 04:42:04.637069 1620518 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 04:42:04.644806 1620518 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 04:42:04.652309 1620518 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:42:04.767112 1620518 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 04:42:04.935446 1620518 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 04:42:04.935507 1620518 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 04:42:04.939304 1620518 start.go:564] Will wait 60s for crictl version
	I1209 04:42:04.939369 1620518 ssh_runner.go:195] Run: which crictl
	I1209 04:42:04.942772 1620518 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 04:42:04.967172 1620518 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
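
After restarting CRI-O, the start-up code waits up to 60s for the CRI socket to appear before asking crictl for its version. A sketch of that stat-poll loop:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// Poll stat() on the CRI socket until it exists or the deadline passes,
// mirroring the "Will wait 60s for socket path" step above.
func main() {
	const sock = "/var/run/crio/crio.sock"
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(sock); err == nil {
			fmt.Println("socket is up:", sock)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for", sock)
}
```
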
	I1209 04:42:04.967246 1620518 ssh_runner.go:195] Run: crio --version
	I1209 04:42:05.000450 1620518 ssh_runner.go:195] Run: crio --version
	I1209 04:42:05.039508 1620518 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1209 04:42:05.042351 1620518 cli_runner.go:164] Run: docker network inspect functional-331811 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 04:42:05.058209 1620518 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1209 04:42:05.065398 1620518 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1209 04:42:05.068071 1620518 kubeadm.go:884] updating cluster {Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 04:42:05.068222 1620518 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1209 04:42:05.068288 1620518 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:42:05.125308 1620518 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 04:42:05.125320 1620518 crio.go:433] Images already preloaded, skipping extraction
	I1209 04:42:05.125384 1620518 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:42:05.156125 1620518 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 04:42:05.156137 1620518 cache_images.go:86] Images are preloaded, skipping loading
	I1209 04:42:05.156143 1620518 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1209 04:42:05.156245 1620518 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-331811 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
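
The drop-in above uses the standard systemd override pattern: an empty ExecStart= first clears the packaged command so the minikube-specific one can replace it. A sketch of rendering such a drop-in from a template (the field names are illustrative, not minikube's actual template variables):

```go
package main

import (
	"os"
	"text/template"
)

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// Values taken from this run; Execute writes the rendered unit to stdout.
	_ = t.Execute(os.Stdout, map[string]string{
		"Version": "v1.35.0-beta.0", "Node": "functional-331811", "IP": "192.168.49.2",
	})
}
```
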
	I1209 04:42:05.156329 1620518 ssh_runner.go:195] Run: crio config
	I1209 04:42:05.230295 1620518 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1209 04:42:05.230327 1620518 cni.go:84] Creating CNI manager for ""
	I1209 04:42:05.230335 1620518 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 04:42:05.230348 1620518 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 04:42:05.230371 1620518 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-331811 NodeName:functional-331811 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 04:42:05.230520 1620518 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-331811"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 04:42:05.230600 1620518 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1209 04:42:05.238799 1620518 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 04:42:05.238882 1620518 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 04:42:05.246819 1620518 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1209 04:42:05.260010 1620518 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1209 04:42:05.273192 1620518 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1209 04:42:05.287174 1620518 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1209 04:42:05.291010 1620518 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:42:05.412581 1620518 ssh_runner.go:195] Run: sudo systemctl start kubelet
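
The grep a few lines up only verifies that control-plane.minikube.internal already resolves through /etc/hosts; if it were missing, an entry would have to be appended before kubelet starts. A sketch of that check, printing rather than editing the file:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// Check whether /etc/hosts maps the given IP to the control-plane name,
// mirroring the grep in the log; this only reports, it does not edit.
func main() {
	const ip, host = "192.168.49.2", "control-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Println("read /etc/hosts:", err)
		return
	}
	for _, line := range strings.Split(string(data), "\n") {
		fields := strings.Fields(line)
		if len(fields) < 2 || fields[0] != ip {
			continue
		}
		for _, name := range fields[1:] {
			if name == host {
				fmt.Println("entry present:", line)
				return
			}
		}
	}
	fmt.Printf("entry missing; would append %q\n", ip+"\t"+host)
}
```
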
	I1209 04:42:05.825078 1620518 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811 for IP: 192.168.49.2
	I1209 04:42:05.825089 1620518 certs.go:195] generating shared ca certs ...
	I1209 04:42:05.825104 1620518 certs.go:227] acquiring lock for ca certs: {Name:mkbe8bce08db7aa945866791683d426e1b560718 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:42:05.825273 1620518 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key
	I1209 04:42:05.825311 1620518 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key
	I1209 04:42:05.825317 1620518 certs.go:257] generating profile certs ...
	I1209 04:42:05.825400 1620518 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.key
	I1209 04:42:05.825453 1620518 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.key.29f4af34
	I1209 04:42:05.825489 1620518 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.key
	I1209 04:42:05.825606 1620518 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem (1338 bytes)
	W1209 04:42:05.825637 1620518 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521_empty.pem, impossibly tiny 0 bytes
	I1209 04:42:05.825643 1620518 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 04:42:05.825670 1620518 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem (1078 bytes)
	I1209 04:42:05.825692 1620518 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem (1123 bytes)
	I1209 04:42:05.825717 1620518 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem (1675 bytes)
	I1209 04:42:05.825764 1620518 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem (1708 bytes)
	I1209 04:42:05.826339 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 04:42:05.847398 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 04:42:05.867264 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 04:42:05.887896 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 04:42:05.907076 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 04:42:05.926224 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 04:42:05.944236 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 04:42:05.962834 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 04:42:05.981333 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem --> /usr/share/ca-certificates/15805212.pem (1708 bytes)
	I1209 04:42:06.001204 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 04:42:06.024226 1620518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem --> /usr/share/ca-certificates/1580521.pem (1338 bytes)
	I1209 04:42:06.044638 1620518 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 04:42:06.059443 1620518 ssh_runner.go:195] Run: openssl version
	I1209 04:42:06.066215 1620518 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15805212.pem
	I1209 04:42:06.074237 1620518 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15805212.pem /etc/ssl/certs/15805212.pem
	I1209 04:42:06.083015 1620518 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15805212.pem
	I1209 04:42:06.087232 1620518 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:27 /usr/share/ca-certificates/15805212.pem
	I1209 04:42:06.087310 1620518 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15805212.pem
	I1209 04:42:06.129553 1620518 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 04:42:06.137400 1620518 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:42:06.144988 1620518 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 04:42:06.152871 1620518 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:42:06.156811 1620518 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:17 /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:42:06.156876 1620518 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:42:06.198268 1620518 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 04:42:06.205673 1620518 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1580521.pem
	I1209 04:42:06.212766 1620518 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1580521.pem /etc/ssl/certs/1580521.pem
	I1209 04:42:06.220239 1620518 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1580521.pem
	I1209 04:42:06.223985 1620518 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:27 /usr/share/ca-certificates/1580521.pem
	I1209 04:42:06.224039 1620518 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1580521.pem
	I1209 04:42:06.265241 1620518 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
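
Each certificate placed under /usr/share/ca-certificates is exposed to OpenSSL's trust lookup by symlinking /etc/ssl/certs/&lt;subject-hash&gt;.0 at it; the b5213941 and 51391683 names tested above come straight from `openssl x509 -hash`. A sketch of one iteration, assuming openssl on PATH and root privileges for the symlink:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	// openssl prints the subject hash used for trust-store lookups.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 in the log above
	link := "/etc/ssl/certs/" + hash + ".0"
	if err := os.Symlink(pem, link); err != nil {
		fmt.Println("symlink:", err) // already exists, or not running as root
	}
}
```
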
	I1209 04:42:06.272666 1620518 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 04:42:06.276249 1620518 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 04:42:06.318459 1620518 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 04:42:06.361504 1620518 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 04:42:06.402819 1620518 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 04:42:06.443793 1620518 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 04:42:06.485065 1620518 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
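
The `-checkend 86400` probes above ask whether each control-plane certificate stays valid for at least another day. The same check in pure Go, using one of the paths from this run:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// Pure-Go equivalent of `openssl x509 -noout -checkend 86400`.
func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/front-proxy-client.crt")
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
	} else {
		fmt.Println("certificate is good for at least 24h")
	}
}
```
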
	I1209 04:42:06.526159 1620518 kubeadm.go:401] StartCluster: {Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:42:06.526240 1620518 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 04:42:06.526302 1620518 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:42:06.557743 1620518 cri.go:89] found id: ""
	I1209 04:42:06.557806 1620518 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 04:42:06.565919 1620518 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1209 04:42:06.565929 1620518 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1209 04:42:06.565979 1620518 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 04:42:06.574421 1620518 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:42:06.574975 1620518 kubeconfig.go:125] found "functional-331811" server: "https://192.168.49.2:8441"
	I1209 04:42:06.576238 1620518 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 04:42:06.585800 1620518 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-09 04:27:27.994828232 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-09 04:42:05.282481991 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
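
Drift detection leans on diff's exit status: 0 means the stored kubeadm.yaml matches the freshly rendered one, 1 means they differ (here: the enable-admission-plugins value changed), and anything above 1 is an error. A sketch of that decision:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	out, err := cmd.CombinedOutput()
	// diff exits 1 when the files differ; only that case means "reconfigure".
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
		fmt.Printf("config drift detected, will reconfigure:\n%s", out)
	} else if err != nil {
		fmt.Println("diff failed:", err)
	} else {
		fmt.Println("no drift, keeping existing config")
	}
}
```
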
	I1209 04:42:06.585820 1620518 kubeadm.go:1161] stopping kube-system containers ...
	I1209 04:42:06.585830 1620518 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 04:42:06.585887 1620518 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:42:06.615364 1620518 cri.go:89] found id: ""
	I1209 04:42:06.615424 1620518 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 04:42:06.632416 1620518 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 04:42:06.640276 1620518 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec  9 04:31 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  9 04:31 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec  9 04:31 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec  9 04:31 /etc/kubernetes/scheduler.conf
	
	I1209 04:42:06.640334 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1209 04:42:06.648234 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1209 04:42:06.655526 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:42:06.655581 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 04:42:06.663036 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1209 04:42:06.670853 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:42:06.670911 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 04:42:06.678990 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1209 04:42:06.687863 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:42:06.687915 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
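
The grep-then-rm sequence above removes any kubeconfig that no longer references https://control-plane.minikube.internal:8441, so that `kubeadm init phase kubeconfig` regenerates it. A compact sketch of that cleanup (it reads and removes real paths; run only against a throwaway node):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8441"
	for _, conf := range []string{
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(conf)
		if err != nil {
			continue // missing file: nothing to clean up
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Println("stale endpoint, removing", conf)
			_ = os.Remove(conf)
		}
	}
}
```
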
	I1209 04:42:06.696417 1620518 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 04:42:06.705368 1620518 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 04:42:06.756797 1620518 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 04:42:08.115058 1620518 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.358236541s)
	I1209 04:42:08.115116 1620518 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 04:42:08.320381 1620518 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 04:42:08.380846 1620518 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 04:42:08.425206 1620518 api_server.go:52] waiting for apiserver process to appear ...
	I1209 04:42:08.425277 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:42:08.925770 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same pgrep probe repeats every ~500ms with no kube-apiserver match, from 04:42:09 through 04:43:07 ...]
	I1209 04:43:07.925648 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
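
This is the apiserver wait loop: pgrep is retried every ~500ms, and when nothing matches for the whole 60s budget the code falls through to the log collection below. A sketch of the loop under those assumptions:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// Probe for the kube-apiserver process until it appears or the 60s
// budget runs out, as the repeated pgrep lines above show.
func main() {
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Printf("apiserver pid: %s", out)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never appeared; falling back to log collection")
}
```
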
	I1209 04:43:08.425431 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:08.425513 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:08.451611 1620518 cri.go:89] found id: ""
	I1209 04:43:08.451625 1620518 logs.go:282] 0 containers: []
	W1209 04:43:08.451634 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:08.451644 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:08.451703 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:08.478028 1620518 cri.go:89] found id: ""
	I1209 04:43:08.478042 1620518 logs.go:282] 0 containers: []
	W1209 04:43:08.478049 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:08.478054 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:08.478116 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:08.504952 1620518 cri.go:89] found id: ""
	I1209 04:43:08.504967 1620518 logs.go:282] 0 containers: []
	W1209 04:43:08.504974 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:08.504980 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:08.505037 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:08.531444 1620518 cri.go:89] found id: ""
	I1209 04:43:08.531460 1620518 logs.go:282] 0 containers: []
	W1209 04:43:08.531468 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:08.531473 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:08.531558 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:08.557796 1620518 cri.go:89] found id: ""
	I1209 04:43:08.557810 1620518 logs.go:282] 0 containers: []
	W1209 04:43:08.557817 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:08.557822 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:08.557878 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:08.589421 1620518 cri.go:89] found id: ""
	I1209 04:43:08.589436 1620518 logs.go:282] 0 containers: []
	W1209 04:43:08.589443 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:08.589448 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:08.589505 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:08.626762 1620518 cri.go:89] found id: ""
	I1209 04:43:08.626776 1620518 logs.go:282] 0 containers: []
	W1209 04:43:08.626783 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:08.626792 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:08.626802 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:08.694456 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:08.694477 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:08.709310 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:08.709333 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:08.773551 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:08.764935   11065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:08.765641   11065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:08.766378   11065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:08.767874   11065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:08.768158   11065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:43:08.764935   11065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:08.765641   11065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:08.766378   11065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:08.767874   11065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:08.768158   11065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:08.773573 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:08.773584 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:08.840868 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:08.840888 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
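
Log gathering runs a fixed set of diagnostic commands and keeps whatever each prints, even when a source fails, which is why the unreachable-apiserver errors above are recorded rather than fatal. A sketch of that pattern with the exact commands from this run:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"CRI-O":            "sudo journalctl -u crio -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range sources {
		// CombinedOutput keeps stderr, so failed sources still leave a trace.
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("== %s (command failed: %v) ==\n%s\n", name, err, out)
			continue
		}
		fmt.Printf("== %s ==\n%s\n", name, out)
	}
}
```
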
	I1209 04:43:11.374296 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:11.384818 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:11.384880 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:11.413700 1620518 cri.go:89] found id: ""
	I1209 04:43:11.413713 1620518 logs.go:282] 0 containers: []
	W1209 04:43:11.413720 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:11.413725 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:11.413783 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:11.439148 1620518 cri.go:89] found id: ""
	I1209 04:43:11.439163 1620518 logs.go:282] 0 containers: []
	W1209 04:43:11.439170 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:11.439175 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:11.439236 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:11.468833 1620518 cri.go:89] found id: ""
	I1209 04:43:11.468847 1620518 logs.go:282] 0 containers: []
	W1209 04:43:11.468854 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:11.468859 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:11.468917 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:11.501328 1620518 cri.go:89] found id: ""
	I1209 04:43:11.501343 1620518 logs.go:282] 0 containers: []
	W1209 04:43:11.501350 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:11.501355 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:11.501420 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:11.527673 1620518 cri.go:89] found id: ""
	I1209 04:43:11.527687 1620518 logs.go:282] 0 containers: []
	W1209 04:43:11.527695 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:11.527700 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:11.527757 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:11.552531 1620518 cri.go:89] found id: ""
	I1209 04:43:11.552545 1620518 logs.go:282] 0 containers: []
	W1209 04:43:11.552552 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:11.552557 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:11.552618 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:11.591493 1620518 cri.go:89] found id: ""
	I1209 04:43:11.591507 1620518 logs.go:282] 0 containers: []
	W1209 04:43:11.591514 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:11.591522 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:11.591538 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:11.626001 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:11.626017 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:11.699914 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:11.699939 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:11.715894 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:11.715917 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:11.780735 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:11.772451   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:11.773056   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:11.774787   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:11.775166   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:11.776611   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:43:11.772451   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:11.773056   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:11.774787   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:11.775166   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:11.776611   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:11.780754 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:11.780765 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:14.352369 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:14.362558 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:14.362633 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:14.388407 1620518 cri.go:89] found id: ""
	I1209 04:43:14.388421 1620518 logs.go:282] 0 containers: []
	W1209 04:43:14.388428 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:14.388433 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:14.388490 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:14.415937 1620518 cri.go:89] found id: ""
	I1209 04:43:14.415952 1620518 logs.go:282] 0 containers: []
	W1209 04:43:14.415960 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:14.415965 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:14.416029 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:14.445418 1620518 cri.go:89] found id: ""
	I1209 04:43:14.445433 1620518 logs.go:282] 0 containers: []
	W1209 04:43:14.445440 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:14.445445 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:14.445513 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:14.471362 1620518 cri.go:89] found id: ""
	I1209 04:43:14.471376 1620518 logs.go:282] 0 containers: []
	W1209 04:43:14.471383 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:14.471388 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:14.471452 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:14.503134 1620518 cri.go:89] found id: ""
	I1209 04:43:14.503148 1620518 logs.go:282] 0 containers: []
	W1209 04:43:14.503155 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:14.503160 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:14.503219 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:14.529790 1620518 cri.go:89] found id: ""
	I1209 04:43:14.529803 1620518 logs.go:282] 0 containers: []
	W1209 04:43:14.529811 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:14.529816 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:14.529889 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:14.555803 1620518 cri.go:89] found id: ""
	I1209 04:43:14.555817 1620518 logs.go:282] 0 containers: []
	W1209 04:43:14.555824 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:14.555832 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:14.555843 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:14.632593 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:14.632611 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:14.648671 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:14.648687 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:14.713371 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:14.705883   11280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:14.706301   11280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:14.707740   11280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:14.708041   11280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:14.709450   11280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:43:14.705883   11280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:14.706301   11280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:14.707740   11280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:14.708041   11280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:14.709450   11280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:14.713382 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:14.713400 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:14.783824 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:14.783843 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
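Each cycle sweeps the same set of control-plane names one `crictl ps` call at a time; every query returns an empty ID list, which is why the `0 containers` warnings repeat for each component. The whole sweep collapses to a short loop (sketch, using the same flags as the log):

    # Check every component the log probes individually; in this report each
    # query returns an empty ID list.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
        ids=$(sudo crictl ps -a --quiet --name="$name")
        echo "$name: ${ids:-<none>}"
    done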
	I1209 04:43:17.318936 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:17.329339 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:17.329407 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:17.356311 1620518 cri.go:89] found id: ""
	I1209 04:43:17.356330 1620518 logs.go:282] 0 containers: []
	W1209 04:43:17.356351 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:17.356356 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:17.356416 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:17.386438 1620518 cri.go:89] found id: ""
	I1209 04:43:17.386452 1620518 logs.go:282] 0 containers: []
	W1209 04:43:17.386460 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:17.386465 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:17.386528 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:17.411209 1620518 cri.go:89] found id: ""
	I1209 04:43:17.411222 1620518 logs.go:282] 0 containers: []
	W1209 04:43:17.411229 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:17.411234 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:17.411291 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:17.437189 1620518 cri.go:89] found id: ""
	I1209 04:43:17.437201 1620518 logs.go:282] 0 containers: []
	W1209 04:43:17.437208 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:17.437229 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:17.437286 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:17.463836 1620518 cri.go:89] found id: ""
	I1209 04:43:17.463850 1620518 logs.go:282] 0 containers: []
	W1209 04:43:17.463857 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:17.463862 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:17.463945 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:17.490604 1620518 cri.go:89] found id: ""
	I1209 04:43:17.490617 1620518 logs.go:282] 0 containers: []
	W1209 04:43:17.490625 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:17.490630 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:17.490691 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:17.517583 1620518 cri.go:89] found id: ""
	I1209 04:43:17.517597 1620518 logs.go:282] 0 containers: []
	W1209 04:43:17.517605 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:17.517612 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:17.517623 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:17.532622 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:17.532638 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:17.611464 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:17.600424   11377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:17.601337   11377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:17.605117   11377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:17.605586   11377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:17.607164   11377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:43:17.600424   11377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:17.601337   11377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:17.605117   11377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:17.605586   11377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:17.607164   11377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:17.611477 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:17.611487 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:17.693672 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:17.693692 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:17.723232 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:17.723249 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
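The "describe nodes" step keeps failing for the same underlying reason: nothing is listening on localhost:8441, so every kubectl request dies with "connection refused" before any API discovery can happen. A direct reachability check makes that explicit (sketch; `-k` only skips TLS verification because this is a liveness probe, not an authenticated call):

    # Probe the endpoint kubectl cannot reach. Port 8441 is taken from the
    # errors above; /healthz is the standard apiserver health endpoint.
    curl -k --max-time 5 https://localhost:8441/healthz \
        || echo "apiserver not listening on :8441 (matches 'connection refused')"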
	I1209 04:43:20.294145 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:20.304681 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:20.304742 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:20.333282 1620518 cri.go:89] found id: ""
	I1209 04:43:20.333297 1620518 logs.go:282] 0 containers: []
	W1209 04:43:20.333304 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:20.333309 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:20.333367 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:20.363210 1620518 cri.go:89] found id: ""
	I1209 04:43:20.363224 1620518 logs.go:282] 0 containers: []
	W1209 04:43:20.363231 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:20.363236 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:20.363300 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:20.387964 1620518 cri.go:89] found id: ""
	I1209 04:43:20.387978 1620518 logs.go:282] 0 containers: []
	W1209 04:43:20.387985 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:20.387995 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:20.388054 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:20.414851 1620518 cri.go:89] found id: ""
	I1209 04:43:20.414864 1620518 logs.go:282] 0 containers: []
	W1209 04:43:20.414871 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:20.414876 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:20.414943 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:20.441500 1620518 cri.go:89] found id: ""
	I1209 04:43:20.441514 1620518 logs.go:282] 0 containers: []
	W1209 04:43:20.441521 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:20.441526 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:20.441584 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:20.468302 1620518 cri.go:89] found id: ""
	I1209 04:43:20.468318 1620518 logs.go:282] 0 containers: []
	W1209 04:43:20.468325 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:20.468331 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:20.468393 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:20.497314 1620518 cri.go:89] found id: ""
	I1209 04:43:20.497328 1620518 logs.go:282] 0 containers: []
	W1209 04:43:20.497345 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:20.497354 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:20.497364 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:20.570464 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:20.570492 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:20.586642 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:20.586660 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:20.665367 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:20.657066   11489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:20.657608   11489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:20.659336   11489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:20.659839   11489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:20.661420   11489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:43:20.657066   11489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:20.657608   11489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:20.659336   11489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:20.659839   11489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:20.661420   11489 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:20.665378 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:20.665389 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:20.733648 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:20.733669 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
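The evidence gathered on every failed cycle comes from the same four sources. Collected in one place, the exact commands are (verbatim from the log; run inside the node):

    sudo journalctl -u kubelet -n 400                 # kubelet logs
    sudo journalctl -u crio -n 400                    # CRI-O logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a   # container status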
	I1209 04:43:23.265697 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:23.275834 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:23.275893 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:23.304587 1620518 cri.go:89] found id: ""
	I1209 04:43:23.304613 1620518 logs.go:282] 0 containers: []
	W1209 04:43:23.304620 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:23.304626 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:23.304692 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:23.329381 1620518 cri.go:89] found id: ""
	I1209 04:43:23.329406 1620518 logs.go:282] 0 containers: []
	W1209 04:43:23.329414 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:23.329419 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:23.329485 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:23.355201 1620518 cri.go:89] found id: ""
	I1209 04:43:23.355215 1620518 logs.go:282] 0 containers: []
	W1209 04:43:23.355222 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:23.355227 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:23.355289 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:23.380238 1620518 cri.go:89] found id: ""
	I1209 04:43:23.380251 1620518 logs.go:282] 0 containers: []
	W1209 04:43:23.380258 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:23.380263 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:23.380322 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:23.409750 1620518 cri.go:89] found id: ""
	I1209 04:43:23.409764 1620518 logs.go:282] 0 containers: []
	W1209 04:43:23.409771 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:23.409776 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:23.409838 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:23.437575 1620518 cri.go:89] found id: ""
	I1209 04:43:23.437588 1620518 logs.go:282] 0 containers: []
	W1209 04:43:23.437595 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:23.437600 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:23.437657 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:23.464403 1620518 cri.go:89] found id: ""
	I1209 04:43:23.464418 1620518 logs.go:282] 0 containers: []
	W1209 04:43:23.464425 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:23.464432 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:23.464444 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:23.479567 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:23.479583 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:23.543433 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:23.534948   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:23.535540   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:23.537123   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:23.537643   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:23.539288   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:43:23.534948   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:23.535540   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:23.537123   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:23.537643   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:23.539288   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:23.543443 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:23.543454 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:23.620689 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:23.620709 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:23.660232 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:23.660249 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:26.230943 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:26.242046 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:26.242107 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:26.269716 1620518 cri.go:89] found id: ""
	I1209 04:43:26.269729 1620518 logs.go:282] 0 containers: []
	W1209 04:43:26.269736 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:26.269741 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:26.269798 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:26.296756 1620518 cri.go:89] found id: ""
	I1209 04:43:26.296771 1620518 logs.go:282] 0 containers: []
	W1209 04:43:26.296778 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:26.296783 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:26.296844 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:26.325789 1620518 cri.go:89] found id: ""
	I1209 04:43:26.325803 1620518 logs.go:282] 0 containers: []
	W1209 04:43:26.325810 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:26.325816 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:26.325878 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:26.362024 1620518 cri.go:89] found id: ""
	I1209 04:43:26.362037 1620518 logs.go:282] 0 containers: []
	W1209 04:43:26.362044 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:26.362049 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:26.362105 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:26.389037 1620518 cri.go:89] found id: ""
	I1209 04:43:26.389051 1620518 logs.go:282] 0 containers: []
	W1209 04:43:26.389058 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:26.389063 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:26.389123 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:26.416773 1620518 cri.go:89] found id: ""
	I1209 04:43:26.416787 1620518 logs.go:282] 0 containers: []
	W1209 04:43:26.416794 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:26.416799 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:26.416854 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:26.442294 1620518 cri.go:89] found id: ""
	I1209 04:43:26.442308 1620518 logs.go:282] 0 containers: []
	W1209 04:43:26.442315 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:26.442323 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:26.442334 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:26.508604 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:26.508623 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:26.523993 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:26.524013 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:26.599795 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:26.590777   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:26.591488   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:26.593176   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:26.593729   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:26.595401   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:43:26.590777   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:26.591488   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:26.593176   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:26.593729   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:26.595401   11696 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:26.599816 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:26.599829 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:26.676981 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:26.677003 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
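Because the apiserver runs as a kubelet-managed static pod, the kubelet journal is the first of those sources worth reading: if CRI-O never shows a kube-apiserver container, the kubelet either never created it or keeps failing to. One way to narrow the 400 captured lines (sketch; the grep pattern is an assumption, not taken from this report):

    # Filter the captured kubelet log down to apiserver/static-pod activity.
    sudo journalctl -u kubelet -n 400 --no-pager \
        | grep -iE 'kube-apiserver|static pod|failed'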
	I1209 04:43:29.206372 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:29.216486 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:29.216547 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:29.241737 1620518 cri.go:89] found id: ""
	I1209 04:43:29.241752 1620518 logs.go:282] 0 containers: []
	W1209 04:43:29.241759 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:29.241764 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:29.241819 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:29.275909 1620518 cri.go:89] found id: ""
	I1209 04:43:29.275922 1620518 logs.go:282] 0 containers: []
	W1209 04:43:29.275929 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:29.275935 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:29.275993 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:29.300470 1620518 cri.go:89] found id: ""
	I1209 04:43:29.300483 1620518 logs.go:282] 0 containers: []
	W1209 04:43:29.300490 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:29.300495 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:29.300552 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:29.326081 1620518 cri.go:89] found id: ""
	I1209 04:43:29.326094 1620518 logs.go:282] 0 containers: []
	W1209 04:43:29.326101 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:29.326106 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:29.326166 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:29.353323 1620518 cri.go:89] found id: ""
	I1209 04:43:29.353337 1620518 logs.go:282] 0 containers: []
	W1209 04:43:29.353344 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:29.353349 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:29.353414 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:29.378490 1620518 cri.go:89] found id: ""
	I1209 04:43:29.378505 1620518 logs.go:282] 0 containers: []
	W1209 04:43:29.378512 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:29.378517 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:29.378599 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:29.404558 1620518 cri.go:89] found id: ""
	I1209 04:43:29.404571 1620518 logs.go:282] 0 containers: []
	W1209 04:43:29.404578 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:29.404585 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:29.404595 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:29.470257 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:29.470277 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:29.485347 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:29.485368 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:29.550659 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:29.541924   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:29.542686   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:29.544323   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:29.545085   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:29.546770   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:43:29.541924   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:29.542686   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:29.544323   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:29.545085   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:29.546770   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:29.550676 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:29.550687 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:29.628618 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:29.628639 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:32.159988 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:32.170169 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:32.170227 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:32.195475 1620518 cri.go:89] found id: ""
	I1209 04:43:32.195489 1620518 logs.go:282] 0 containers: []
	W1209 04:43:32.195496 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:32.195502 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:32.195558 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:32.221067 1620518 cri.go:89] found id: ""
	I1209 04:43:32.221080 1620518 logs.go:282] 0 containers: []
	W1209 04:43:32.221088 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:32.221093 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:32.221160 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:32.247302 1620518 cri.go:89] found id: ""
	I1209 04:43:32.247315 1620518 logs.go:282] 0 containers: []
	W1209 04:43:32.247322 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:32.247327 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:32.247388 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:32.273214 1620518 cri.go:89] found id: ""
	I1209 04:43:32.273227 1620518 logs.go:282] 0 containers: []
	W1209 04:43:32.273234 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:32.273239 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:32.273296 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:32.301827 1620518 cri.go:89] found id: ""
	I1209 04:43:32.301842 1620518 logs.go:282] 0 containers: []
	W1209 04:43:32.301849 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:32.301855 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:32.301920 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:32.327504 1620518 cri.go:89] found id: ""
	I1209 04:43:32.327518 1620518 logs.go:282] 0 containers: []
	W1209 04:43:32.327526 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:32.327531 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:32.327592 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:32.354211 1620518 cri.go:89] found id: ""
	I1209 04:43:32.354225 1620518 logs.go:282] 0 containers: []
	W1209 04:43:32.354232 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:32.354240 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:32.354251 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:32.424906 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:32.424926 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:32.440380 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:32.440396 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:32.508486 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:32.500209   11908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:32.500881   11908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:32.502632   11908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:32.503285   11908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:32.504430   11908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:43:32.500209   11908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:32.500881   11908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:32.502632   11908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:32.503285   11908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:32.504430   11908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:32.508496 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:32.508506 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:32.577521 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:32.577541 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
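The timestamps (04:43:11, :14, :17, ...) show the loop retrying on a roughly three-second cadence until some outer deadline expires. A bounded wait with the same shape looks like this (sketch; the 300 s timeout is an assumption, the interval is read off the timestamps above):

    # Poll for the apiserver process with a hard deadline (assumed 300 s).
    deadline=$((SECONDS + 300))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        [ "$SECONDS" -ge "$deadline" ] && { echo "timed out waiting for apiserver"; exit 1; }
        sleep 3
    done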
	I1209 04:43:35.111262 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:35.121574 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:35.121636 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:35.147108 1620518 cri.go:89] found id: ""
	I1209 04:43:35.147121 1620518 logs.go:282] 0 containers: []
	W1209 04:43:35.147128 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:35.147134 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:35.147193 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:35.172557 1620518 cri.go:89] found id: ""
	I1209 04:43:35.172571 1620518 logs.go:282] 0 containers: []
	W1209 04:43:35.172578 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:35.172583 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:35.172644 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:35.200994 1620518 cri.go:89] found id: ""
	I1209 04:43:35.201007 1620518 logs.go:282] 0 containers: []
	W1209 04:43:35.201020 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:35.201025 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:35.201082 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:35.230443 1620518 cri.go:89] found id: ""
	I1209 04:43:35.230457 1620518 logs.go:282] 0 containers: []
	W1209 04:43:35.230470 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:35.230476 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:35.230536 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:35.255703 1620518 cri.go:89] found id: ""
	I1209 04:43:35.255716 1620518 logs.go:282] 0 containers: []
	W1209 04:43:35.255723 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:35.255728 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:35.255786 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:35.281749 1620518 cri.go:89] found id: ""
	I1209 04:43:35.281762 1620518 logs.go:282] 0 containers: []
	W1209 04:43:35.281780 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:35.281786 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:35.281852 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:35.306677 1620518 cri.go:89] found id: ""
	I1209 04:43:35.306690 1620518 logs.go:282] 0 containers: []
	W1209 04:43:35.306697 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:35.306705 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:35.306715 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:35.375938 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:35.375957 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:35.390955 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:35.390984 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:35.457222 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:43:35.448756   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:35.449545   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:35.451244   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:35.451795   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:35.453385   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:43:35.448756   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:35.449545   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:35.451244   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:35.451795   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:35.453385   12014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:35.457240 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:35.457252 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:35.526131 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:35.526150 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:38.057096 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:38.068039 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:38.068101 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:38.097645 1620518 cri.go:89] found id: ""
	I1209 04:43:38.097659 1620518 logs.go:282] 0 containers: []
	W1209 04:43:38.097666 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:38.097672 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:38.097730 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:38.125024 1620518 cri.go:89] found id: ""
	I1209 04:43:38.125038 1620518 logs.go:282] 0 containers: []
	W1209 04:43:38.125045 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:38.125051 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:38.125106 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:38.158551 1620518 cri.go:89] found id: ""
	I1209 04:43:38.158565 1620518 logs.go:282] 0 containers: []
	W1209 04:43:38.158597 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:38.158602 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:38.158667 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:38.185732 1620518 cri.go:89] found id: ""
	I1209 04:43:38.185746 1620518 logs.go:282] 0 containers: []
	W1209 04:43:38.185753 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:38.185758 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:38.185817 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:38.211917 1620518 cri.go:89] found id: ""
	I1209 04:43:38.211931 1620518 logs.go:282] 0 containers: []
	W1209 04:43:38.211938 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:38.211944 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:38.212003 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:38.242391 1620518 cri.go:89] found id: ""
	I1209 04:43:38.242407 1620518 logs.go:282] 0 containers: []
	W1209 04:43:38.242414 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:38.242420 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:38.242495 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:38.268565 1620518 cri.go:89] found id: ""
	I1209 04:43:38.268598 1620518 logs.go:282] 0 containers: []
	W1209 04:43:38.268606 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:38.268616 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:38.268628 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:38.335336 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:38.335355 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:38.350651 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:38.350667 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:38.413931 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	 output: 
	** stderr ** 
	E1209 04:43:38.405709   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:38.406404   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:38.408105   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:38.408552   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:38.410061   12120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:38.413941 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:38.413952 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:38.481874 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:38.481894 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:41.013724 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:41.024462 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:41.024521 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:41.050950 1620518 cri.go:89] found id: ""
	I1209 04:43:41.050965 1620518 logs.go:282] 0 containers: []
	W1209 04:43:41.050973 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:41.050979 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:41.051050 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:41.080781 1620518 cri.go:89] found id: ""
	I1209 04:43:41.080794 1620518 logs.go:282] 0 containers: []
	W1209 04:43:41.080801 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:41.080806 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:41.080864 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:41.107039 1620518 cri.go:89] found id: ""
	I1209 04:43:41.107053 1620518 logs.go:282] 0 containers: []
	W1209 04:43:41.107059 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:41.107064 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:41.107122 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:41.131302 1620518 cri.go:89] found id: ""
	I1209 04:43:41.131316 1620518 logs.go:282] 0 containers: []
	W1209 04:43:41.131323 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:41.131328 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:41.131387 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:41.160541 1620518 cri.go:89] found id: ""
	I1209 04:43:41.160554 1620518 logs.go:282] 0 containers: []
	W1209 04:43:41.160560 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:41.160566 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:41.160623 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:41.189715 1620518 cri.go:89] found id: ""
	I1209 04:43:41.189728 1620518 logs.go:282] 0 containers: []
	W1209 04:43:41.189735 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:41.189741 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:41.189798 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:41.215532 1620518 cri.go:89] found id: ""
	I1209 04:43:41.215545 1620518 logs.go:282] 0 containers: []
	W1209 04:43:41.215552 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:41.215559 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:41.215570 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:41.248230 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:41.248245 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:41.316564 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:41.316589 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:41.332031 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:41.332048 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:41.399707 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	 output: 
	** stderr ** 
	E1209 04:43:41.390550   12237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:41.391761   12237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:41.393298   12237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:41.393745   12237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:41.395316   12237 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:41.399720 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:41.399733 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
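
The repeated `found id: ""` results show what the CRI queries return: `crictl ps -a --quiet` prints only container IDs, one per line, so empty output means no container with that name exists in any state. A hedged Go sketch of the same check (a hypothetical helper, not the cri.go code the log references):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs returns whatever IDs crictl prints for a named container;
    // an empty result corresponds to the `found id: ""` lines above.
    func containerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
            ids, err := containerIDs(c)
            if err != nil || len(ids) == 0 {
                fmt.Printf("no container found matching %q\n", c)
                continue
            }
            fmt.Println(c, ids)
        }
    }
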
	I1209 04:43:43.973310 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:43.983577 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:43.983641 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:44.019270 1620518 cri.go:89] found id: ""
	I1209 04:43:44.019285 1620518 logs.go:282] 0 containers: []
	W1209 04:43:44.019292 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:44.019298 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:44.019362 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:44.046326 1620518 cri.go:89] found id: ""
	I1209 04:43:44.046340 1620518 logs.go:282] 0 containers: []
	W1209 04:43:44.046347 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:44.046353 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:44.046416 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:44.073718 1620518 cri.go:89] found id: ""
	I1209 04:43:44.073732 1620518 logs.go:282] 0 containers: []
	W1209 04:43:44.073739 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:44.073745 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:44.073806 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:44.099804 1620518 cri.go:89] found id: ""
	I1209 04:43:44.099818 1620518 logs.go:282] 0 containers: []
	W1209 04:43:44.099825 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:44.099830 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:44.099888 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:44.125332 1620518 cri.go:89] found id: ""
	I1209 04:43:44.125346 1620518 logs.go:282] 0 containers: []
	W1209 04:43:44.125353 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:44.125358 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:44.125418 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:44.153398 1620518 cri.go:89] found id: ""
	I1209 04:43:44.153413 1620518 logs.go:282] 0 containers: []
	W1209 04:43:44.153420 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:44.153438 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:44.153501 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:44.181868 1620518 cri.go:89] found id: ""
	I1209 04:43:44.181882 1620518 logs.go:282] 0 containers: []
	W1209 04:43:44.181889 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:44.181909 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:44.181919 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:44.197827 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:44.197843 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:44.262818 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	 output: 
	** stderr ** 
	E1209 04:43:44.254312   12332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:44.255050   12332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:44.256717   12332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:44.257244   12332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:44.258990   12332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:44.262829 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:44.262840 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:44.331403 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:44.331423 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:44.363934 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:44.363951 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
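
The log-gathering steps shell out to journalctl with `-u <unit> -n 400`, i.e. the last 400 journal lines for the kubelet and CRI-O units. A small illustrative wrapper (assumed helper names; not the ssh_runner code the log references, which runs these commands over SSH inside the node):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // unitLogs fetches the last 400 journal lines for a systemd unit, the
    // same shape as the journalctl commands in the log.
    func unitLogs(unit string) (string, error) {
        out, err := exec.Command("/bin/bash", "-c",
            fmt.Sprintf("sudo journalctl -u %s -n 400", unit)).CombinedOutput()
        return string(out), err
    }

    func main() {
        for _, u := range []string{"kubelet", "crio"} {
            logs, err := unitLogs(u)
            if err != nil {
                fmt.Println("journalctl failed for", u+":", err)
                continue
            }
            fmt.Printf("=== %s: %d bytes of journal ===\n", u, len(logs))
        }
    }
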
	I1209 04:43:46.935826 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:46.946383 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:46.946442 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:46.972025 1620518 cri.go:89] found id: ""
	I1209 04:43:46.972039 1620518 logs.go:282] 0 containers: []
	W1209 04:43:46.972046 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:46.972052 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:46.972114 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:47.005389 1620518 cri.go:89] found id: ""
	I1209 04:43:47.005411 1620518 logs.go:282] 0 containers: []
	W1209 04:43:47.005428 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:47.005434 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:47.005503 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:47.034137 1620518 cri.go:89] found id: ""
	I1209 04:43:47.034151 1620518 logs.go:282] 0 containers: []
	W1209 04:43:47.034159 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:47.034164 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:47.034224 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:47.060061 1620518 cri.go:89] found id: ""
	I1209 04:43:47.060074 1620518 logs.go:282] 0 containers: []
	W1209 04:43:47.060081 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:47.060086 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:47.060155 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:47.087325 1620518 cri.go:89] found id: ""
	I1209 04:43:47.087339 1620518 logs.go:282] 0 containers: []
	W1209 04:43:47.087346 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:47.087351 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:47.087412 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:47.113243 1620518 cri.go:89] found id: ""
	I1209 04:43:47.113257 1620518 logs.go:282] 0 containers: []
	W1209 04:43:47.113265 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:47.113271 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:47.113333 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:47.139697 1620518 cri.go:89] found id: ""
	I1209 04:43:47.139710 1620518 logs.go:282] 0 containers: []
	W1209 04:43:47.139718 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:47.139725 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:47.139735 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:47.208645 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:47.208665 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:47.224099 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:47.224118 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:47.291121 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	 output: 
	** stderr ** 
	E1209 04:43:47.282532   12440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:47.283245   12440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:47.284856   12440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:47.285413   12440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:47.287073   12440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:47.291131 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:47.291143 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:47.360007 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:47.360028 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:49.894321 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:49.904751 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:49.904813 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:49.933138 1620518 cri.go:89] found id: ""
	I1209 04:43:49.933152 1620518 logs.go:282] 0 containers: []
	W1209 04:43:49.933160 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:49.933165 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:49.933223 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:49.959143 1620518 cri.go:89] found id: ""
	I1209 04:43:49.959156 1620518 logs.go:282] 0 containers: []
	W1209 04:43:49.959163 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:49.959174 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:49.959231 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:49.984103 1620518 cri.go:89] found id: ""
	I1209 04:43:49.984118 1620518 logs.go:282] 0 containers: []
	W1209 04:43:49.984125 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:49.984130 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:49.984188 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:50.019299 1620518 cri.go:89] found id: ""
	I1209 04:43:50.019314 1620518 logs.go:282] 0 containers: []
	W1209 04:43:50.019322 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:50.019328 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:50.019394 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:50.050759 1620518 cri.go:89] found id: ""
	I1209 04:43:50.050773 1620518 logs.go:282] 0 containers: []
	W1209 04:43:50.050780 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:50.050785 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:50.050852 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:50.077915 1620518 cri.go:89] found id: ""
	I1209 04:43:50.077929 1620518 logs.go:282] 0 containers: []
	W1209 04:43:50.077937 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:50.077942 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:50.078003 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:50.105340 1620518 cri.go:89] found id: ""
	I1209 04:43:50.105354 1620518 logs.go:282] 0 containers: []
	W1209 04:43:50.105361 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:50.105369 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:50.105382 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:50.176940 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	 output: 
	** stderr ** 
	E1209 04:43:50.168731   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:50.169401   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:50.171044   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:50.171455   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:50.173045   12535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:50.176950 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:50.176961 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:50.250014 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:50.250035 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:50.279274 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:50.279290 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:50.344336 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:50.344354 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
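
The repeated memcache.go errors come from kubectl's API discovery: client-go retries GET https://localhost:8441/api several times, and every dial fails with connection refused because nothing is listening on the apiserver port yet. The condition can be reproduced with a plain HTTP probe (illustrative only; kubectl's real client also presents credentials from the kubeconfig):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The apiserver serves a self-signed certificate, so skip
            // verification for this connectivity probe only.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://localhost:8441/api?timeout=32s")
        if err != nil {
            fmt.Println("probe failed (nothing listening on 8441?):", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("apiserver answered with HTTP", resp.StatusCode)
    }
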
	I1209 04:43:52.861162 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:52.873255 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:52.873331 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:52.901728 1620518 cri.go:89] found id: ""
	I1209 04:43:52.901743 1620518 logs.go:282] 0 containers: []
	W1209 04:43:52.901750 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:52.901756 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:52.901847 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:52.927167 1620518 cri.go:89] found id: ""
	I1209 04:43:52.927180 1620518 logs.go:282] 0 containers: []
	W1209 04:43:52.927187 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:52.927192 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:52.927252 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:52.953243 1620518 cri.go:89] found id: ""
	I1209 04:43:52.953256 1620518 logs.go:282] 0 containers: []
	W1209 04:43:52.953263 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:52.953268 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:52.953326 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:52.981127 1620518 cri.go:89] found id: ""
	I1209 04:43:52.981140 1620518 logs.go:282] 0 containers: []
	W1209 04:43:52.981147 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:52.981152 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:52.981210 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:53.014584 1620518 cri.go:89] found id: ""
	I1209 04:43:53.014600 1620518 logs.go:282] 0 containers: []
	W1209 04:43:53.014608 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:53.014613 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:53.014681 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:53.041932 1620518 cri.go:89] found id: ""
	I1209 04:43:53.041946 1620518 logs.go:282] 0 containers: []
	W1209 04:43:53.041954 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:53.041960 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:53.042027 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:53.068705 1620518 cri.go:89] found id: ""
	I1209 04:43:53.068719 1620518 logs.go:282] 0 containers: []
	W1209 04:43:53.068725 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:53.068733 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:53.068749 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:53.097490 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:53.097506 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:53.162858 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:53.162879 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:43:53.177170 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:53.177185 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:53.240297 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	 output: 
	** stderr ** 
	E1209 04:43:53.232197   12657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:53.232986   12657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:53.234644   12657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:53.234971   12657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:53.236396   12657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:53.240307 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:53.240320 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:55.810542 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:55.820923 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:55.820985 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:55.860409 1620518 cri.go:89] found id: ""
	I1209 04:43:55.860422 1620518 logs.go:282] 0 containers: []
	W1209 04:43:55.860429 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:55.860434 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:55.860491 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:55.895639 1620518 cri.go:89] found id: ""
	I1209 04:43:55.895653 1620518 logs.go:282] 0 containers: []
	W1209 04:43:55.895660 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:55.895665 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:55.895729 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:55.922274 1620518 cri.go:89] found id: ""
	I1209 04:43:55.922289 1620518 logs.go:282] 0 containers: []
	W1209 04:43:55.922297 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:55.922302 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:55.922366 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:55.948415 1620518 cri.go:89] found id: ""
	I1209 04:43:55.948437 1620518 logs.go:282] 0 containers: []
	W1209 04:43:55.948444 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:55.948448 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:55.948509 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:55.977442 1620518 cri.go:89] found id: ""
	I1209 04:43:55.977456 1620518 logs.go:282] 0 containers: []
	W1209 04:43:55.977463 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:55.977468 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:55.977525 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:56.006812 1620518 cri.go:89] found id: ""
	I1209 04:43:56.006827 1620518 logs.go:282] 0 containers: []
	W1209 04:43:56.006835 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:56.006841 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:56.006920 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:56.035113 1620518 cri.go:89] found id: ""
	I1209 04:43:56.035128 1620518 logs.go:282] 0 containers: []
	W1209 04:43:56.035135 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:56.035143 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:56.035161 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:56.108405 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	 output: 
	** stderr ** 
	E1209 04:43:56.099799   12746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:56.100584   12746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:56.102265   12746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:56.102913   12746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:56.104653   12746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:56.108424 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:56.108435 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:56.178263 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:56.178284 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:56.211498 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:56.211513 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:56.278845 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:56.278867 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
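
The dmesg step narrows the kernel ring buffer to warning-and-worse records: -H prints human-readable timestamps, -P disables the pager, -L=never disables color, and --level warn,err,crit,alert,emerg filters by severity before tail keeps the last 400 lines. A one-shot Go wrapper around the same pipeline, as a sketch:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same pipeline as the log: severity-filtered kernel messages,
        // human-readable, no pager, no color, last 400 lines only.
        cmd := "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Println("dmesg failed:", err)
        }
        fmt.Print(string(out))
    }
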
	I1209 04:43:58.794283 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:43:58.804745 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:43:58.804805 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:43:58.836463 1620518 cri.go:89] found id: ""
	I1209 04:43:58.836482 1620518 logs.go:282] 0 containers: []
	W1209 04:43:58.836489 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:43:58.836494 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:43:58.836551 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:43:58.871008 1620518 cri.go:89] found id: ""
	I1209 04:43:58.871021 1620518 logs.go:282] 0 containers: []
	W1209 04:43:58.871028 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:43:58.871033 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:43:58.871096 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:43:58.904275 1620518 cri.go:89] found id: ""
	I1209 04:43:58.904289 1620518 logs.go:282] 0 containers: []
	W1209 04:43:58.904296 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:43:58.904301 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:43:58.904363 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:43:58.934333 1620518 cri.go:89] found id: ""
	I1209 04:43:58.934346 1620518 logs.go:282] 0 containers: []
	W1209 04:43:58.934353 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:43:58.934361 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:43:58.934418 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:43:58.961476 1620518 cri.go:89] found id: ""
	I1209 04:43:58.961490 1620518 logs.go:282] 0 containers: []
	W1209 04:43:58.961497 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:43:58.961503 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:43:58.961562 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:43:58.987249 1620518 cri.go:89] found id: ""
	I1209 04:43:58.987263 1620518 logs.go:282] 0 containers: []
	W1209 04:43:58.987270 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:43:58.987276 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:43:58.987335 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:43:59.015314 1620518 cri.go:89] found id: ""
	I1209 04:43:59.015328 1620518 logs.go:282] 0 containers: []
	W1209 04:43:59.015335 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:43:59.015342 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:43:59.015353 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:43:59.079415 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	 output: 
	** stderr ** 
	E1209 04:43:59.070310   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:59.071244   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:59.073057   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:59.073701   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:43:59.075400   12855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:43:59.079425 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:43:59.079436 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:43:59.150742 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:43:59.150761 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:43:59.180649 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:43:59.180665 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:43:59.248002 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:43:59.248020 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:01.763804 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:01.774240 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:01.774302 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:01.802786 1620518 cri.go:89] found id: ""
	I1209 04:44:01.802800 1620518 logs.go:282] 0 containers: []
	W1209 04:44:01.802808 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:01.802813 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:01.802870 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:01.842779 1620518 cri.go:89] found id: ""
	I1209 04:44:01.842794 1620518 logs.go:282] 0 containers: []
	W1209 04:44:01.842801 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:01.842806 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:01.842867 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:01.874062 1620518 cri.go:89] found id: ""
	I1209 04:44:01.874081 1620518 logs.go:282] 0 containers: []
	W1209 04:44:01.874088 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:01.874093 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:01.874157 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:01.903692 1620518 cri.go:89] found id: ""
	I1209 04:44:01.903706 1620518 logs.go:282] 0 containers: []
	W1209 04:44:01.903713 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:01.903718 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:01.903777 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:01.933430 1620518 cri.go:89] found id: ""
	I1209 04:44:01.933444 1620518 logs.go:282] 0 containers: []
	W1209 04:44:01.933451 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:01.933456 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:01.933515 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:01.961286 1620518 cri.go:89] found id: ""
	I1209 04:44:01.961300 1620518 logs.go:282] 0 containers: []
	W1209 04:44:01.961307 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:01.961313 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:01.961373 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:01.990521 1620518 cri.go:89] found id: ""
	I1209 04:44:01.990535 1620518 logs.go:282] 0 containers: []
	W1209 04:44:01.990542 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:01.990550 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:01.990561 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:02.008959 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:02.008977 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:02.076349 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:02.067978   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:02.068680   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:02.070314   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:02.070881   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:02.072482   12964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:44:02.076359 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:02.076370 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:02.144940 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:02.144960 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:02.175776 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:02.175793 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
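	The "connection refused" failures above come from kubectl dialing https://localhost:8441/api while no kube-apiserver process exists on the node. A minimal way to reproduce the probe by hand (the pgrep line is the exact check minikube logs above; the curl line is an added illustration of the refused dial, not a command from this log):

	    # Same process probe as in the log; pgrep exits non-zero when no
	    # matching process exists, so the echo fires while the apiserver is down.
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"

	    # Illustrative connectivity check against the port kubectl is dialing
	    # (-k skips TLS verification; we only care whether the dial is refused).
	    curl -k --max-time 5 https://localhost:8441/api || echo "dial to localhost:8441 refused"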
	I1209 04:44:04.751592 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:04.762232 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:04.762298 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:04.788096 1620518 cri.go:89] found id: ""
	I1209 04:44:04.788110 1620518 logs.go:282] 0 containers: []
	W1209 04:44:04.788117 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:04.788122 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:04.788184 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:04.829955 1620518 cri.go:89] found id: ""
	I1209 04:44:04.829969 1620518 logs.go:282] 0 containers: []
	W1209 04:44:04.829975 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:04.829981 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:04.830037 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:04.869304 1620518 cri.go:89] found id: ""
	I1209 04:44:04.869318 1620518 logs.go:282] 0 containers: []
	W1209 04:44:04.869325 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:04.869330 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:04.869389 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:04.900033 1620518 cri.go:89] found id: ""
	I1209 04:44:04.900048 1620518 logs.go:282] 0 containers: []
	W1209 04:44:04.900054 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:04.900060 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:04.900118 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:04.926358 1620518 cri.go:89] found id: ""
	I1209 04:44:04.926373 1620518 logs.go:282] 0 containers: []
	W1209 04:44:04.926381 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:04.926386 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:04.926446 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:04.952219 1620518 cri.go:89] found id: ""
	I1209 04:44:04.952233 1620518 logs.go:282] 0 containers: []
	W1209 04:44:04.952240 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:04.952245 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:04.952318 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:04.981606 1620518 cri.go:89] found id: ""
	I1209 04:44:04.981633 1620518 logs.go:282] 0 containers: []
	W1209 04:44:04.981640 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:04.981648 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:04.981659 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:05.054363 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:05.045151   13065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:05.046053   13065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:05.047917   13065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:05.048288   13065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:05.049848   13065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:44:05.054374 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:05.054384 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:05.123486 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:05.123508 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:05.153591 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:05.153609 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:05.220156 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:05.220176 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
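	Each retry cycle issues one crictl query per control-plane component; empty output is what the log records as found id: "" followed by the No container was found warning. The sweep condenses to a loop over the same names and flags shown above:

	    # One query per expected component, exactly as in the cycle above.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      if [ -z "$ids" ]; then
	        echo "No container was found matching \"$name\""
	      else
	        echo "$name: $ids"
	      fi
	    done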
	I1209 04:44:07.735728 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:07.746784 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:07.746849 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:07.773633 1620518 cri.go:89] found id: ""
	I1209 04:44:07.773646 1620518 logs.go:282] 0 containers: []
	W1209 04:44:07.773653 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:07.773658 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:07.773714 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:07.799209 1620518 cri.go:89] found id: ""
	I1209 04:44:07.799222 1620518 logs.go:282] 0 containers: []
	W1209 04:44:07.799230 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:07.799235 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:07.799289 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:07.833034 1620518 cri.go:89] found id: ""
	I1209 04:44:07.833047 1620518 logs.go:282] 0 containers: []
	W1209 04:44:07.833055 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:07.833060 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:07.833117 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:07.861960 1620518 cri.go:89] found id: ""
	I1209 04:44:07.861979 1620518 logs.go:282] 0 containers: []
	W1209 04:44:07.861986 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:07.861991 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:07.862048 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:07.891370 1620518 cri.go:89] found id: ""
	I1209 04:44:07.891384 1620518 logs.go:282] 0 containers: []
	W1209 04:44:07.891392 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:07.891398 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:07.891499 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:07.925093 1620518 cri.go:89] found id: ""
	I1209 04:44:07.925106 1620518 logs.go:282] 0 containers: []
	W1209 04:44:07.925113 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:07.925119 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:07.925179 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:07.953814 1620518 cri.go:89] found id: ""
	I1209 04:44:07.953828 1620518 logs.go:282] 0 containers: []
	W1209 04:44:07.953845 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:07.953853 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:07.953863 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:08.019480 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:08.019500 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:08.035405 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:08.035420 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:08.103942 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:08.095426   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:08.096263   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:08.097939   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:08.098274   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:08.099807   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:44:08.103951 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:08.103964 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:08.173425 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:08.173447 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
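	With no component containers to inspect, each cycle falls back to node-level logs; the commands below are taken verbatim from the cycle above and can be replayed over SSH on the node:

	    # Unit logs for the kubelet and the CRI-O runtime (last 400 lines each).
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400

	    # Kernel warnings and errors, with the same flags minikube uses.
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

	    # Container status, preferring crictl and falling back to docker.
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a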
	I1209 04:44:10.707757 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:10.717859 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:10.717922 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:10.743691 1620518 cri.go:89] found id: ""
	I1209 04:44:10.743705 1620518 logs.go:282] 0 containers: []
	W1209 04:44:10.743712 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:10.743717 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:10.743775 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:10.769622 1620518 cri.go:89] found id: ""
	I1209 04:44:10.769636 1620518 logs.go:282] 0 containers: []
	W1209 04:44:10.769643 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:10.769648 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:10.769707 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:10.802785 1620518 cri.go:89] found id: ""
	I1209 04:44:10.802798 1620518 logs.go:282] 0 containers: []
	W1209 04:44:10.802806 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:10.802811 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:10.802870 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:10.833564 1620518 cri.go:89] found id: ""
	I1209 04:44:10.833579 1620518 logs.go:282] 0 containers: []
	W1209 04:44:10.833587 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:10.833592 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:10.833655 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:10.876749 1620518 cri.go:89] found id: ""
	I1209 04:44:10.876763 1620518 logs.go:282] 0 containers: []
	W1209 04:44:10.876770 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:10.876775 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:10.876832 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:10.907080 1620518 cri.go:89] found id: ""
	I1209 04:44:10.907093 1620518 logs.go:282] 0 containers: []
	W1209 04:44:10.907101 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:10.907106 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:10.907164 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:10.932888 1620518 cri.go:89] found id: ""
	I1209 04:44:10.932903 1620518 logs.go:282] 0 containers: []
	W1209 04:44:10.932910 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:10.932918 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:10.932928 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:10.998090 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:10.998113 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:11.016501 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:11.016518 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:11.083628 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:11.075111   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:11.075522   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:11.077185   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:11.077924   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:11.079551   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:44:11.083645 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:11.083658 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:11.151855 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:11.151878 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
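	The cycle timestamps (04:44:01, :04, :07, :10, ...) show the health check retrying roughly every three seconds. A sketch of that wait loop, where the 3 s interval and the 300 s deadline are assumptions inferred from the timestamps rather than values read from minikube source:

	    # Poll until a kube-apiserver process appears or the assumed deadline passes.
	    deadline=$((SECONDS + 300))   # assumed overall timeout
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      if [ "$SECONDS" -ge "$deadline" ]; then
	        echo "timed out waiting for kube-apiserver" >&2
	        exit 1
	      fi
	      sleep 3                     # interval inferred from the log timestamps
	    done
	    echo "kube-apiserver is running"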
	I1209 04:44:13.684470 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:13.694706 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:13.694766 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:13.720935 1620518 cri.go:89] found id: ""
	I1209 04:44:13.720948 1620518 logs.go:282] 0 containers: []
	W1209 04:44:13.720955 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:13.720960 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:13.721016 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:13.750286 1620518 cri.go:89] found id: ""
	I1209 04:44:13.750299 1620518 logs.go:282] 0 containers: []
	W1209 04:44:13.750306 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:13.750314 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:13.750372 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:13.774808 1620518 cri.go:89] found id: ""
	I1209 04:44:13.774822 1620518 logs.go:282] 0 containers: []
	W1209 04:44:13.774831 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:13.774836 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:13.774909 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:13.800153 1620518 cri.go:89] found id: ""
	I1209 04:44:13.800167 1620518 logs.go:282] 0 containers: []
	W1209 04:44:13.800174 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:13.800180 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:13.800237 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:13.833377 1620518 cri.go:89] found id: ""
	I1209 04:44:13.833402 1620518 logs.go:282] 0 containers: []
	W1209 04:44:13.833409 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:13.833415 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:13.833487 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:13.863754 1620518 cri.go:89] found id: ""
	I1209 04:44:13.863767 1620518 logs.go:282] 0 containers: []
	W1209 04:44:13.863774 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:13.863780 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:13.863836 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:13.899969 1620518 cri.go:89] found id: ""
	I1209 04:44:13.899983 1620518 logs.go:282] 0 containers: []
	W1209 04:44:13.899990 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:13.899997 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:13.900008 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:13.964963 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:13.964983 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:13.980119 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:13.980136 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:14.051622 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:14.042651   13381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:14.043666   13381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:14.045214   13381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:14.045756   13381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:14.047568   13381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:44:14.051632 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:14.051644 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:14.120152 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:14.120171 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:16.651342 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:16.661695 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:16.661769 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:16.688695 1620518 cri.go:89] found id: ""
	I1209 04:44:16.688709 1620518 logs.go:282] 0 containers: []
	W1209 04:44:16.688717 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:16.688724 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:16.688783 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:16.714481 1620518 cri.go:89] found id: ""
	I1209 04:44:16.714495 1620518 logs.go:282] 0 containers: []
	W1209 04:44:16.714502 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:16.714507 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:16.714563 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:16.740966 1620518 cri.go:89] found id: ""
	I1209 04:44:16.740980 1620518 logs.go:282] 0 containers: []
	W1209 04:44:16.740987 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:16.740992 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:16.741048 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:16.772332 1620518 cri.go:89] found id: ""
	I1209 04:44:16.772346 1620518 logs.go:282] 0 containers: []
	W1209 04:44:16.772353 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:16.772358 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:16.772429 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:16.800889 1620518 cri.go:89] found id: ""
	I1209 04:44:16.800903 1620518 logs.go:282] 0 containers: []
	W1209 04:44:16.800910 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:16.800916 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:16.800979 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:16.836688 1620518 cri.go:89] found id: ""
	I1209 04:44:16.836702 1620518 logs.go:282] 0 containers: []
	W1209 04:44:16.836709 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:16.836715 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:16.836779 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:16.877225 1620518 cri.go:89] found id: ""
	I1209 04:44:16.877238 1620518 logs.go:282] 0 containers: []
	W1209 04:44:16.877245 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:16.877253 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:16.877263 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:16.947272 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:16.947292 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:16.964059 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:16.964075 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:17.033163 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:17.024900   13483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:17.025556   13483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:17.027130   13483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:17.027646   13483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:17.029289   13483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:44:17.033172 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:17.033183 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:17.101285 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:17.101306 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
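	The "describe nodes" gather uses the version-pinned kubectl bundled on the node together with the node-local kubeconfig, so it can be replayed directly to confirm the status-1 exit seen above while localhost:8441 refuses connections:

	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig
	    echo "kubectl exit status: $?"   # 1 for as long as the apiserver is unreachable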
	I1209 04:44:19.635736 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:19.645923 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:19.645987 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:19.671893 1620518 cri.go:89] found id: ""
	I1209 04:44:19.671907 1620518 logs.go:282] 0 containers: []
	W1209 04:44:19.671913 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:19.671918 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:19.671975 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:19.697145 1620518 cri.go:89] found id: ""
	I1209 04:44:19.697159 1620518 logs.go:282] 0 containers: []
	W1209 04:44:19.697166 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:19.697171 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:19.697228 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:19.726050 1620518 cri.go:89] found id: ""
	I1209 04:44:19.726064 1620518 logs.go:282] 0 containers: []
	W1209 04:44:19.726072 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:19.726077 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:19.726135 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:19.753277 1620518 cri.go:89] found id: ""
	I1209 04:44:19.753290 1620518 logs.go:282] 0 containers: []
	W1209 04:44:19.753297 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:19.753302 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:19.753364 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:19.778375 1620518 cri.go:89] found id: ""
	I1209 04:44:19.778388 1620518 logs.go:282] 0 containers: []
	W1209 04:44:19.778395 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:19.778410 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:19.778483 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:19.803668 1620518 cri.go:89] found id: ""
	I1209 04:44:19.803682 1620518 logs.go:282] 0 containers: []
	W1209 04:44:19.803690 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:19.803695 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:19.803757 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:19.841128 1620518 cri.go:89] found id: ""
	I1209 04:44:19.841142 1620518 logs.go:282] 0 containers: []
	W1209 04:44:19.841149 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:19.841157 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:19.841167 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:19.917953 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:19.917972 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:19.933437 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:19.933455 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:20.001189 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:19.992491   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:19.992867   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:19.994437   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:19.994797   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:19.996244   13588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:44:20.001200 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:20.001214 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:20.072973 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:20.072992 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:22.607218 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:22.618312 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:22.618373 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:22.644573 1620518 cri.go:89] found id: ""
	I1209 04:44:22.644587 1620518 logs.go:282] 0 containers: []
	W1209 04:44:22.644594 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:22.644600 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:22.644669 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:22.671737 1620518 cri.go:89] found id: ""
	I1209 04:44:22.671751 1620518 logs.go:282] 0 containers: []
	W1209 04:44:22.671758 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:22.671763 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:22.671819 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:22.697372 1620518 cri.go:89] found id: ""
	I1209 04:44:22.697386 1620518 logs.go:282] 0 containers: []
	W1209 04:44:22.697393 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:22.697398 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:22.697456 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:22.724412 1620518 cri.go:89] found id: ""
	I1209 04:44:22.724428 1620518 logs.go:282] 0 containers: []
	W1209 04:44:22.724436 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:22.724448 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:22.724512 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:22.757520 1620518 cri.go:89] found id: ""
	I1209 04:44:22.757533 1620518 logs.go:282] 0 containers: []
	W1209 04:44:22.757551 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:22.757556 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:22.757623 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:22.787926 1620518 cri.go:89] found id: ""
	I1209 04:44:22.787939 1620518 logs.go:282] 0 containers: []
	W1209 04:44:22.787946 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:22.787951 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:22.788014 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:22.813253 1620518 cri.go:89] found id: ""
	I1209 04:44:22.813267 1620518 logs.go:282] 0 containers: []
	W1209 04:44:22.813284 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:22.813292 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:22.813303 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:22.889757 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:22.889776 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:22.905834 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:22.905850 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:22.976939 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:22.967912   13692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:22.968798   13692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:22.970382   13692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:22.971021   13692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:22.972529   13692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:44:22.976949 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:22.976960 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:23.044862 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:23.044881 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:25.578382 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:25.589220 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:25.589287 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:25.617854 1620518 cri.go:89] found id: ""
	I1209 04:44:25.617868 1620518 logs.go:282] 0 containers: []
	W1209 04:44:25.617875 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:25.617880 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:25.617937 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:25.642864 1620518 cri.go:89] found id: ""
	I1209 04:44:25.642883 1620518 logs.go:282] 0 containers: []
	W1209 04:44:25.642890 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:25.642895 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:25.642952 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:25.670199 1620518 cri.go:89] found id: ""
	I1209 04:44:25.670213 1620518 logs.go:282] 0 containers: []
	W1209 04:44:25.670220 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:25.670225 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:25.670283 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:25.697688 1620518 cri.go:89] found id: ""
	I1209 04:44:25.697702 1620518 logs.go:282] 0 containers: []
	W1209 04:44:25.697720 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:25.697725 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:25.697827 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:25.723203 1620518 cri.go:89] found id: ""
	I1209 04:44:25.723218 1620518 logs.go:282] 0 containers: []
	W1209 04:44:25.723225 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:25.723230 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:25.723287 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:25.752776 1620518 cri.go:89] found id: ""
	I1209 04:44:25.752790 1620518 logs.go:282] 0 containers: []
	W1209 04:44:25.752798 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:25.752803 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:25.752866 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:25.778450 1620518 cri.go:89] found id: ""
	I1209 04:44:25.778474 1620518 logs.go:282] 0 containers: []
	W1209 04:44:25.778483 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:25.778490 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:25.778501 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:25.846732 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:25.846750 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:25.863685 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:25.863701 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:25.940317 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:25.931569   13798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:25.932325   13798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:25.934011   13798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:25.934352   13798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:25.936136   13798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:44:25.940328 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:25.940339 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:26.013087 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:26.013109 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
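	The container-status probe above prefers crictl and falls back to docker when the crictl invocation fails. An approximately equivalent, more explicit form (a sketch; the original one-liner also falls through to docker when crictl exists but exits non-zero):

		if command -v crictl >/dev/null 2>&1; then
			sudo crictl ps -a
		else
			sudo docker ps -a
		fi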
	I1209 04:44:28.543111 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
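	The pgrep probe repeats roughly every three seconds (timestamps 04:44:25, :28, :31, ...) until an apiserver process appears. As a standalone wait loop, the equivalent would be (a sketch, not minikube's actual source):

		# block until an apiserver process matching this profile shows up
		until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
			sleep 3
		done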
	I1209 04:44:28.553653 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:28.553717 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:28.579452 1620518 cri.go:89] found id: ""
	I1209 04:44:28.579465 1620518 logs.go:282] 0 containers: []
	W1209 04:44:28.579472 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:28.579478 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:28.579542 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:28.605894 1620518 cri.go:89] found id: ""
	I1209 04:44:28.605909 1620518 logs.go:282] 0 containers: []
	W1209 04:44:28.605916 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:28.605921 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:28.605983 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:28.633021 1620518 cri.go:89] found id: ""
	I1209 04:44:28.633044 1620518 logs.go:282] 0 containers: []
	W1209 04:44:28.633051 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:28.633057 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:28.633129 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:28.657926 1620518 cri.go:89] found id: ""
	I1209 04:44:28.657946 1620518 logs.go:282] 0 containers: []
	W1209 04:44:28.657953 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:28.657959 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:28.658027 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:28.685339 1620518 cri.go:89] found id: ""
	I1209 04:44:28.685353 1620518 logs.go:282] 0 containers: []
	W1209 04:44:28.685360 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:28.685366 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:28.685433 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:28.718472 1620518 cri.go:89] found id: ""
	I1209 04:44:28.718485 1620518 logs.go:282] 0 containers: []
	W1209 04:44:28.718492 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:28.718498 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:28.718554 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:28.748502 1620518 cri.go:89] found id: ""
	I1209 04:44:28.748516 1620518 logs.go:282] 0 containers: []
	W1209 04:44:28.748523 1620518 logs.go:284] No container was found matching "kindnet"
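	Each cycle sweeps the same seven component names through crictl, and every sweep in this section comes back empty, confirming the control plane never started. The sweep collapsed into a loop (a sketch of equivalent shell, not minikube's actual code):

		for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
		            kube-controller-manager kindnet; do
			ids=$(sudo crictl ps -a --quiet --name="$name")
			[ -z "$ids" ] && echo "no container matching \"$name\""
		done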
	I1209 04:44:28.748531 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:28.748543 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:28.763578 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:28.763594 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:28.830210 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:28.817342   13896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:28.818137   13896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:28.819682   13896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:28.819980   13896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:28.823920   13896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:44:28.830220 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:28.830231 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:28.905378 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:28.905401 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:28.934445 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:28.934466 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:31.501091 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:31.511589 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:31.511662 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:31.537954 1620518 cri.go:89] found id: ""
	I1209 04:44:31.537967 1620518 logs.go:282] 0 containers: []
	W1209 04:44:31.537974 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:31.537979 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:31.538035 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:31.563399 1620518 cri.go:89] found id: ""
	I1209 04:44:31.563412 1620518 logs.go:282] 0 containers: []
	W1209 04:44:31.563419 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:31.563424 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:31.563481 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:31.590727 1620518 cri.go:89] found id: ""
	I1209 04:44:31.590741 1620518 logs.go:282] 0 containers: []
	W1209 04:44:31.590748 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:31.590753 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:31.590817 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:31.619991 1620518 cri.go:89] found id: ""
	I1209 04:44:31.620004 1620518 logs.go:282] 0 containers: []
	W1209 04:44:31.620012 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:31.620017 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:31.620073 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:31.646682 1620518 cri.go:89] found id: ""
	I1209 04:44:31.646695 1620518 logs.go:282] 0 containers: []
	W1209 04:44:31.646703 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:31.646709 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:31.646783 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:31.676240 1620518 cri.go:89] found id: ""
	I1209 04:44:31.676254 1620518 logs.go:282] 0 containers: []
	W1209 04:44:31.676261 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:31.676266 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:31.676324 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:31.701874 1620518 cri.go:89] found id: ""
	I1209 04:44:31.701898 1620518 logs.go:282] 0 containers: []
	W1209 04:44:31.701906 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:31.701914 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:31.701924 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:31.729913 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:31.729929 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:31.795202 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:31.795222 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:31.810455 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:31.810471 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:31.910056 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:31.901648   14015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:31.902306   14015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:31.903933   14015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:31.904418   14015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:31.906134   14015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:44:31.910067 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:31.910079 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:34.486956 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:34.497309 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:34.497372 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:34.523236 1620518 cri.go:89] found id: ""
	I1209 04:44:34.523250 1620518 logs.go:282] 0 containers: []
	W1209 04:44:34.523257 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:34.523262 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:34.523320 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:34.549906 1620518 cri.go:89] found id: ""
	I1209 04:44:34.549920 1620518 logs.go:282] 0 containers: []
	W1209 04:44:34.549935 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:34.549940 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:34.549997 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:34.577694 1620518 cri.go:89] found id: ""
	I1209 04:44:34.577708 1620518 logs.go:282] 0 containers: []
	W1209 04:44:34.577716 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:34.577721 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:34.577781 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:34.604297 1620518 cri.go:89] found id: ""
	I1209 04:44:34.604311 1620518 logs.go:282] 0 containers: []
	W1209 04:44:34.604319 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:34.604325 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:34.604388 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:34.629233 1620518 cri.go:89] found id: ""
	I1209 04:44:34.629249 1620518 logs.go:282] 0 containers: []
	W1209 04:44:34.629257 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:34.629262 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:34.629330 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:34.659380 1620518 cri.go:89] found id: ""
	I1209 04:44:34.659394 1620518 logs.go:282] 0 containers: []
	W1209 04:44:34.659401 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:34.659407 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:34.659466 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:34.688342 1620518 cri.go:89] found id: ""
	I1209 04:44:34.688356 1620518 logs.go:282] 0 containers: []
	W1209 04:44:34.688363 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:34.688370 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:34.688383 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:34.703538 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:34.703555 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:34.766893 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:34.758520   14106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:34.759198   14106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:34.760746   14106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:34.761300   14106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:34.763031   14106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:44:34.766907 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:34.766925 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:34.835016 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:34.835035 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:34.867468 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:34.867484 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:37.441777 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:37.452150 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:37.452220 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:37.477442 1620518 cri.go:89] found id: ""
	I1209 04:44:37.477456 1620518 logs.go:282] 0 containers: []
	W1209 04:44:37.477463 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:37.477468 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:37.477525 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:37.503669 1620518 cri.go:89] found id: ""
	I1209 04:44:37.503683 1620518 logs.go:282] 0 containers: []
	W1209 04:44:37.503690 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:37.503696 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:37.503756 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:37.529304 1620518 cri.go:89] found id: ""
	I1209 04:44:37.529318 1620518 logs.go:282] 0 containers: []
	W1209 04:44:37.529326 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:37.529331 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:37.529388 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:37.555509 1620518 cri.go:89] found id: ""
	I1209 04:44:37.555523 1620518 logs.go:282] 0 containers: []
	W1209 04:44:37.555539 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:37.555545 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:37.555603 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:37.581297 1620518 cri.go:89] found id: ""
	I1209 04:44:37.581310 1620518 logs.go:282] 0 containers: []
	W1209 04:44:37.581328 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:37.581334 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:37.581403 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:37.607757 1620518 cri.go:89] found id: ""
	I1209 04:44:37.607774 1620518 logs.go:282] 0 containers: []
	W1209 04:44:37.607781 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:37.607787 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:37.607863 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:37.634135 1620518 cri.go:89] found id: ""
	I1209 04:44:37.634159 1620518 logs.go:282] 0 containers: []
	W1209 04:44:37.634167 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:37.634174 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:37.634187 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:37.698412 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:37.690495   14207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:37.691106   14207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:37.692656   14207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:37.693121   14207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:37.694648   14207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:44:37.698423 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:37.698434 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:37.765691 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:37.765711 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:37.794807 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:37.794822 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:37.865591 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:37.865609 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:40.382843 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:40.393026 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:40.393086 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:40.417900 1620518 cri.go:89] found id: ""
	I1209 04:44:40.417913 1620518 logs.go:282] 0 containers: []
	W1209 04:44:40.417920 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:40.417926 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:40.417984 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:40.447221 1620518 cri.go:89] found id: ""
	I1209 04:44:40.447235 1620518 logs.go:282] 0 containers: []
	W1209 04:44:40.447242 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:40.447247 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:40.447305 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:40.472564 1620518 cri.go:89] found id: ""
	I1209 04:44:40.472578 1620518 logs.go:282] 0 containers: []
	W1209 04:44:40.472585 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:40.472591 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:40.472651 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:40.498097 1620518 cri.go:89] found id: ""
	I1209 04:44:40.498111 1620518 logs.go:282] 0 containers: []
	W1209 04:44:40.498118 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:40.498123 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:40.498182 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:40.523258 1620518 cri.go:89] found id: ""
	I1209 04:44:40.523271 1620518 logs.go:282] 0 containers: []
	W1209 04:44:40.523279 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:40.523287 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:40.523343 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:40.548390 1620518 cri.go:89] found id: ""
	I1209 04:44:40.548404 1620518 logs.go:282] 0 containers: []
	W1209 04:44:40.548411 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:40.548417 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:40.548475 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:40.573171 1620518 cri.go:89] found id: ""
	I1209 04:44:40.573185 1620518 logs.go:282] 0 containers: []
	W1209 04:44:40.573192 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:40.573199 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:40.573211 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:40.587922 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:40.587937 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:40.648925 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:40.640617   14317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:40.641385   14317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:40.643081   14317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:40.643670   14317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:40.645179   14317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:44:40.648934 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:40.648945 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:40.721024 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:40.721047 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:40.756647 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:40.756664 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:43.325607 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:43.335615 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:43.335677 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:43.365344 1620518 cri.go:89] found id: ""
	I1209 04:44:43.365360 1620518 logs.go:282] 0 containers: []
	W1209 04:44:43.365367 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:43.365373 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:43.365432 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:43.391751 1620518 cri.go:89] found id: ""
	I1209 04:44:43.391764 1620518 logs.go:282] 0 containers: []
	W1209 04:44:43.391772 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:43.391783 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:43.391843 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:43.417345 1620518 cri.go:89] found id: ""
	I1209 04:44:43.417359 1620518 logs.go:282] 0 containers: []
	W1209 04:44:43.417366 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:43.417372 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:43.417433 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:43.444314 1620518 cri.go:89] found id: ""
	I1209 04:44:43.444328 1620518 logs.go:282] 0 containers: []
	W1209 04:44:43.444335 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:43.444341 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:43.444402 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:43.473635 1620518 cri.go:89] found id: ""
	I1209 04:44:43.473649 1620518 logs.go:282] 0 containers: []
	W1209 04:44:43.473656 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:43.473661 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:43.473721 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:43.499726 1620518 cri.go:89] found id: ""
	I1209 04:44:43.499740 1620518 logs.go:282] 0 containers: []
	W1209 04:44:43.499747 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:43.499752 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:43.499812 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:43.526373 1620518 cri.go:89] found id: ""
	I1209 04:44:43.526388 1620518 logs.go:282] 0 containers: []
	W1209 04:44:43.526396 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:43.526404 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:43.526415 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:43.591625 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:43.591644 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:43.606802 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:43.606818 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:43.671535 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:43.662523   14423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:43.663221   14423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:43.664909   14423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:43.665492   14423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:43.667229   14423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:44:43.671545 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:43.671556 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:43.742830 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:43.742849 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:46.272131 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:46.282533 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:46.282611 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:46.307629 1620518 cri.go:89] found id: ""
	I1209 04:44:46.307644 1620518 logs.go:282] 0 containers: []
	W1209 04:44:46.307652 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:46.307657 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:46.307718 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:46.334241 1620518 cri.go:89] found id: ""
	I1209 04:44:46.334255 1620518 logs.go:282] 0 containers: []
	W1209 04:44:46.334262 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:46.334267 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:46.334326 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:46.360606 1620518 cri.go:89] found id: ""
	I1209 04:44:46.360619 1620518 logs.go:282] 0 containers: []
	W1209 04:44:46.360627 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:46.360632 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:46.360693 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:46.391930 1620518 cri.go:89] found id: ""
	I1209 04:44:46.391944 1620518 logs.go:282] 0 containers: []
	W1209 04:44:46.391951 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:46.391956 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:46.392018 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:46.418088 1620518 cri.go:89] found id: ""
	I1209 04:44:46.418102 1620518 logs.go:282] 0 containers: []
	W1209 04:44:46.418109 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:46.418114 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:46.418173 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:46.444114 1620518 cri.go:89] found id: ""
	I1209 04:44:46.444129 1620518 logs.go:282] 0 containers: []
	W1209 04:44:46.444135 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:46.444141 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:46.444202 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:46.469066 1620518 cri.go:89] found id: ""
	I1209 04:44:46.469079 1620518 logs.go:282] 0 containers: []
	W1209 04:44:46.469096 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:46.469105 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:46.469116 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:46.535118 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:46.526762   14524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:46.527187   14524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:46.528934   14524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:46.529451   14524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:46.531143   14524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:44:46.535128 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:46.535140 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:46.603490 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:46.603513 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:46.633565 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:46.633582 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:46.707757 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:46.707778 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:49.223668 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:49.233804 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:49.233863 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:49.262060 1620518 cri.go:89] found id: ""
	I1209 04:44:49.262074 1620518 logs.go:282] 0 containers: []
	W1209 04:44:49.262081 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:49.262087 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:49.262146 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:49.288289 1620518 cri.go:89] found id: ""
	I1209 04:44:49.288303 1620518 logs.go:282] 0 containers: []
	W1209 04:44:49.288310 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:49.288315 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:49.288372 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:49.317469 1620518 cri.go:89] found id: ""
	I1209 04:44:49.317482 1620518 logs.go:282] 0 containers: []
	W1209 04:44:49.317489 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:49.317495 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:49.317553 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:49.343598 1620518 cri.go:89] found id: ""
	I1209 04:44:49.343612 1620518 logs.go:282] 0 containers: []
	W1209 04:44:49.343619 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:49.343624 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:49.343682 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:49.369884 1620518 cri.go:89] found id: ""
	I1209 04:44:49.369898 1620518 logs.go:282] 0 containers: []
	W1209 04:44:49.369905 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:49.369910 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:49.369968 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:49.397485 1620518 cri.go:89] found id: ""
	I1209 04:44:49.397499 1620518 logs.go:282] 0 containers: []
	W1209 04:44:49.397506 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:49.397512 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:49.397576 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:49.426780 1620518 cri.go:89] found id: ""
	I1209 04:44:49.426794 1620518 logs.go:282] 0 containers: []
	W1209 04:44:49.426802 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:49.426810 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:49.426820 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:49.455508 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:49.455524 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:49.521613 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:49.521632 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:49.537098 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:49.537115 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:49.604403 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:49.595461   14642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:49.596171   14642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:49.597975   14642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:49.598557   14642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:49.600294   14642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:44:49.604415 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:49.604427 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:52.175474 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:52.185416 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:52.185490 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:52.210165 1620518 cri.go:89] found id: ""
	I1209 04:44:52.210179 1620518 logs.go:282] 0 containers: []
	W1209 04:44:52.210186 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:52.210191 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:52.210250 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:52.235252 1620518 cri.go:89] found id: ""
	I1209 04:44:52.235265 1620518 logs.go:282] 0 containers: []
	W1209 04:44:52.235272 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:52.235277 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:52.235335 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:52.260814 1620518 cri.go:89] found id: ""
	I1209 04:44:52.260828 1620518 logs.go:282] 0 containers: []
	W1209 04:44:52.260835 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:52.260840 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:52.260899 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:52.287596 1620518 cri.go:89] found id: ""
	I1209 04:44:52.287609 1620518 logs.go:282] 0 containers: []
	W1209 04:44:52.287616 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:52.287621 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:52.287677 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:52.315049 1620518 cri.go:89] found id: ""
	I1209 04:44:52.315062 1620518 logs.go:282] 0 containers: []
	W1209 04:44:52.315069 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:52.315075 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:52.315139 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:52.339741 1620518 cri.go:89] found id: ""
	I1209 04:44:52.339755 1620518 logs.go:282] 0 containers: []
	W1209 04:44:52.339762 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:52.339767 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:52.339825 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:52.369959 1620518 cri.go:89] found id: ""
	I1209 04:44:52.369973 1620518 logs.go:282] 0 containers: []
	W1209 04:44:52.369981 1620518 logs.go:284] No container was found matching "kindnet"
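
The probe sequence above is identical in every cycle: pgrep first checks for a running kube-apiserver process, then crictl lists containers matching each expected control-plane component, and every query comes back empty. A sketch of that crictl loop (component names copied from the log; sudo and crictl are assumed to be available on the host):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // The components minikube checks, in the order the log probes them.
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for _, name := range components {
            // Mirrors: sudo crictl ps -a --quiet --name=<component>
            out, err := exec.Command("sudo", "crictl", "ps", "-a",
                "--quiet", "--name="+name).Output()
            if err != nil {
                fmt.Printf("%s: crictl failed: %v\n", name, err)
                continue
            }
            ids := strings.Fields(string(out))
            if len(ids) == 0 {
                // The result every probe gets in the log: found id: ""
                fmt.Printf("no container matching %q\n", name)
                continue
            }
            fmt.Printf("%s: %d container(s): %s\n", name, len(ids),
                strings.Join(ids, ", "))
        }
    }
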
	I1209 04:44:52.369988 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:52.369998 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:52.442787 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:52.434156   14730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:52.434984   14730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:52.436742   14730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:52.437458   14730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:52.439036   14730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:44:52.434156   14730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:52.434984   14730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:52.436742   14730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:52.437458   14730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:52.439036   14730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
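
The "describe nodes" step shells out to the kubectl binary bundled for the cluster's Kubernetes version, pointed at the in-node kubeconfig, and reports stdout and stderr separately, which is why the failures above show an empty stdout: section followed by the stderr. A sketch of that invocation with the paths copied from the log, keeping the two streams apart the same way:

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
            "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig")
        var stdout, stderr bytes.Buffer
        cmd.Stdout = &stdout
        cmd.Stderr = &stderr
        if err := cmd.Run(); err != nil {
            // With no apiserver listening this exits with status 1, an
            // empty stdout, and the connection-refused lines on stderr,
            // exactly as logged above.
            fmt.Printf("exit: %v\nstdout:\n%s\nstderr:\n%s\n",
                err, stdout.String(), stderr.String())
            return
        }
        fmt.Print(stdout.String())
    }
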
	I1209 04:44:52.442797 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:52.442807 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:52.511615 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:52.511634 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:52.542801 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:52.542817 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:52.608882 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:52.608904 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:55.125120 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:55.135789 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:55.135848 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:55.162401 1620518 cri.go:89] found id: ""
	I1209 04:44:55.162416 1620518 logs.go:282] 0 containers: []
	W1209 04:44:55.162423 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:55.162428 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:55.162487 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:55.190716 1620518 cri.go:89] found id: ""
	I1209 04:44:55.190730 1620518 logs.go:282] 0 containers: []
	W1209 04:44:55.190736 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:55.190742 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:55.190799 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:55.216812 1620518 cri.go:89] found id: ""
	I1209 04:44:55.216825 1620518 logs.go:282] 0 containers: []
	W1209 04:44:55.216832 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:55.216839 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:55.216896 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:55.241064 1620518 cri.go:89] found id: ""
	I1209 04:44:55.241079 1620518 logs.go:282] 0 containers: []
	W1209 04:44:55.241086 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:55.241092 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:55.241148 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:55.270237 1620518 cri.go:89] found id: ""
	I1209 04:44:55.270251 1620518 logs.go:282] 0 containers: []
	W1209 04:44:55.270258 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:55.270263 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:55.270322 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:55.296228 1620518 cri.go:89] found id: ""
	I1209 04:44:55.296242 1620518 logs.go:282] 0 containers: []
	W1209 04:44:55.296249 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:55.296254 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:55.296315 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:55.322153 1620518 cri.go:89] found id: ""
	I1209 04:44:55.322167 1620518 logs.go:282] 0 containers: []
	W1209 04:44:55.322174 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:55.322181 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:55.322192 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:55.390665 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:55.390684 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:55.405506 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:55.405523 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:55.471951 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:55.463255   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:55.463802   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:55.465674   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:55.466180   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:55.467961   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:44:55.463255   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:55.463802   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:55.465674   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:55.466180   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:55.467961   14842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:44:55.471960 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:55.471972 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:55.542641 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:55.542662 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
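
The timestamps show the whole probe-and-gather cycle re-entering roughly every three seconds. A generic polling sketch with that cadence; the 3s interval is read off the log, and the 2-minute budget is an assumption for illustration, not minikube's actual constant:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitFor re-runs probe at the given interval until it returns true or
    // the budget is spent, the cadence visible in the timestamps above.
    func waitFor(probe func() bool, every, budget time.Duration) bool {
        deadline := time.Now().Add(budget)
        for {
            if probe() {
                return true
            }
            if time.Now().After(deadline) {
                return false
            }
            time.Sleep(every)
        }
    }

    func main() {
        up := func() bool {
            // The endpoint kubectl keeps failing against in this log.
            c, err := net.DialTimeout("tcp", "localhost:8441", time.Second)
            if err != nil {
                return false
            }
            c.Close()
            return true
        }
        if waitFor(up, 3*time.Second, 2*time.Minute) {
            fmt.Println("apiserver answered")
        } else {
            fmt.Println("gave up waiting for localhost:8441")
        }
    }
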
	I1209 04:44:58.078721 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:44:58.089961 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:44:58.090029 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:44:58.117883 1620518 cri.go:89] found id: ""
	I1209 04:44:58.117896 1620518 logs.go:282] 0 containers: []
	W1209 04:44:58.117902 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:44:58.117908 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:44:58.117968 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:44:58.150212 1620518 cri.go:89] found id: ""
	I1209 04:44:58.150226 1620518 logs.go:282] 0 containers: []
	W1209 04:44:58.150233 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:44:58.150238 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:44:58.150296 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:44:58.177448 1620518 cri.go:89] found id: ""
	I1209 04:44:58.177462 1620518 logs.go:282] 0 containers: []
	W1209 04:44:58.177469 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:44:58.177474 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:44:58.177533 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:44:58.203663 1620518 cri.go:89] found id: ""
	I1209 04:44:58.203676 1620518 logs.go:282] 0 containers: []
	W1209 04:44:58.203683 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:44:58.203688 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:44:58.203779 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:44:58.229153 1620518 cri.go:89] found id: ""
	I1209 04:44:58.229167 1620518 logs.go:282] 0 containers: []
	W1209 04:44:58.229174 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:44:58.229179 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:44:58.229237 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:44:58.253337 1620518 cri.go:89] found id: ""
	I1209 04:44:58.253365 1620518 logs.go:282] 0 containers: []
	W1209 04:44:58.253372 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:44:58.253377 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:44:58.253433 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:44:58.279202 1620518 cri.go:89] found id: ""
	I1209 04:44:58.279215 1620518 logs.go:282] 0 containers: []
	W1209 04:44:58.279222 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:44:58.279230 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:44:58.279240 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:44:58.352607 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:44:58.352626 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:44:58.380559 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:44:58.380575 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:44:58.450340 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:44:58.450359 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:44:58.466733 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:44:58.466753 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:44:58.539538 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:44:58.531537   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:58.532132   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:58.533605   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:58.534107   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:58.535589   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:44:58.531537   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:58.532132   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:58.533605   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:58.534107   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:44:58.535589   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:45:01.039807 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:01.051635 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:01.051699 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:01.081094 1620518 cri.go:89] found id: ""
	I1209 04:45:01.081120 1620518 logs.go:282] 0 containers: []
	W1209 04:45:01.081132 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:01.081138 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:01.081216 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:01.110254 1620518 cri.go:89] found id: ""
	I1209 04:45:01.110270 1620518 logs.go:282] 0 containers: []
	W1209 04:45:01.110277 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:01.110282 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:01.110348 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:01.142200 1620518 cri.go:89] found id: ""
	I1209 04:45:01.142217 1620518 logs.go:282] 0 containers: []
	W1209 04:45:01.142224 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:01.142230 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:01.142295 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:01.173624 1620518 cri.go:89] found id: ""
	I1209 04:45:01.173640 1620518 logs.go:282] 0 containers: []
	W1209 04:45:01.173647 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:01.173653 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:01.173714 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:01.200655 1620518 cri.go:89] found id: ""
	I1209 04:45:01.200669 1620518 logs.go:282] 0 containers: []
	W1209 04:45:01.200676 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:01.200681 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:01.200753 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:01.228245 1620518 cri.go:89] found id: ""
	I1209 04:45:01.228260 1620518 logs.go:282] 0 containers: []
	W1209 04:45:01.228268 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:01.228274 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:01.228344 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:01.255910 1620518 cri.go:89] found id: ""
	I1209 04:45:01.255924 1620518 logs.go:282] 0 containers: []
	W1209 04:45:01.255932 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:01.255941 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:01.255955 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:01.272811 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:01.272829 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:01.345905 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:01.336766   15040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:01.337312   15040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:01.339248   15040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:01.339633   15040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:01.341414   15040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:45:01.336766   15040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:01.337312   15040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:01.339248   15040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:01.339633   15040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:01.341414   15040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:45:01.345916 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:01.345926 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:01.428612 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:01.428634 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:01.462789 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:01.462805 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:04.036441 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:04.048197 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:04.048263 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:04.077328 1620518 cri.go:89] found id: ""
	I1209 04:45:04.077347 1620518 logs.go:282] 0 containers: []
	W1209 04:45:04.077354 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:04.077361 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:04.077424 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:04.105221 1620518 cri.go:89] found id: ""
	I1209 04:45:04.105235 1620518 logs.go:282] 0 containers: []
	W1209 04:45:04.105243 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:04.105249 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:04.105315 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:04.136847 1620518 cri.go:89] found id: ""
	I1209 04:45:04.136860 1620518 logs.go:282] 0 containers: []
	W1209 04:45:04.136868 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:04.136873 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:04.136934 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:04.167906 1620518 cri.go:89] found id: ""
	I1209 04:45:04.167920 1620518 logs.go:282] 0 containers: []
	W1209 04:45:04.167930 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:04.167936 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:04.168012 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:04.198111 1620518 cri.go:89] found id: ""
	I1209 04:45:04.198126 1620518 logs.go:282] 0 containers: []
	W1209 04:45:04.198133 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:04.198139 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:04.198201 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:04.228375 1620518 cri.go:89] found id: ""
	I1209 04:45:04.228389 1620518 logs.go:282] 0 containers: []
	W1209 04:45:04.228396 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:04.228402 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:04.228460 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:04.255398 1620518 cri.go:89] found id: ""
	I1209 04:45:04.255411 1620518 logs.go:282] 0 containers: []
	W1209 04:45:04.255418 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:04.255425 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:04.255436 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:04.285882 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:04.285898 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:04.352741 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:04.352763 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:04.369185 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:04.369202 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:04.440688 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:04.432150   15163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:04.432585   15163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:04.434392   15163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:04.434973   15163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:04.436580   15163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:45:04.432150   15163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:04.432585   15163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:04.434392   15163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:04.434973   15163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:04.436580   15163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:45:04.440698 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:04.440710 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:07.013764 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:07.024294 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:07.024356 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:07.050143 1620518 cri.go:89] found id: ""
	I1209 04:45:07.050157 1620518 logs.go:282] 0 containers: []
	W1209 04:45:07.050164 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:07.050170 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:07.050240 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:07.076876 1620518 cri.go:89] found id: ""
	I1209 04:45:07.076890 1620518 logs.go:282] 0 containers: []
	W1209 04:45:07.076897 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:07.076902 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:07.076957 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:07.102491 1620518 cri.go:89] found id: ""
	I1209 04:45:07.102505 1620518 logs.go:282] 0 containers: []
	W1209 04:45:07.102512 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:07.102517 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:07.102597 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:07.132406 1620518 cri.go:89] found id: ""
	I1209 04:45:07.132421 1620518 logs.go:282] 0 containers: []
	W1209 04:45:07.132428 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:07.132432 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:07.132489 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:07.158308 1620518 cri.go:89] found id: ""
	I1209 04:45:07.158322 1620518 logs.go:282] 0 containers: []
	W1209 04:45:07.158329 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:07.158334 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:07.158394 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:07.185219 1620518 cri.go:89] found id: ""
	I1209 04:45:07.185232 1620518 logs.go:282] 0 containers: []
	W1209 04:45:07.185240 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:07.185245 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:07.185304 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:07.211200 1620518 cri.go:89] found id: ""
	I1209 04:45:07.211213 1620518 logs.go:282] 0 containers: []
	W1209 04:45:07.211220 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:07.211227 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:07.211239 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:07.279098 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:07.279117 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:07.307654 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:07.307669 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:07.380382 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:07.380406 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:07.396198 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:07.396216 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:07.463840 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:07.455780   15275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:07.456634   15275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:07.458306   15275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:07.458894   15275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:07.460163   15275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:45:07.455780   15275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:07.456634   15275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:07.458306   15275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:07.458894   15275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:07.460163   15275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:45:09.964491 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:09.974856 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:09.974917 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:10.013610 1620518 cri.go:89] found id: ""
	I1209 04:45:10.013627 1620518 logs.go:282] 0 containers: []
	W1209 04:45:10.013635 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:10.013641 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:10.013710 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:10.041923 1620518 cri.go:89] found id: ""
	I1209 04:45:10.041937 1620518 logs.go:282] 0 containers: []
	W1209 04:45:10.041945 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:10.041950 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:10.042012 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:10.070273 1620518 cri.go:89] found id: ""
	I1209 04:45:10.070287 1620518 logs.go:282] 0 containers: []
	W1209 04:45:10.070295 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:10.070306 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:10.070365 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:10.101336 1620518 cri.go:89] found id: ""
	I1209 04:45:10.101350 1620518 logs.go:282] 0 containers: []
	W1209 04:45:10.101357 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:10.101362 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:10.101423 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:10.129685 1620518 cri.go:89] found id: ""
	I1209 04:45:10.129699 1620518 logs.go:282] 0 containers: []
	W1209 04:45:10.129706 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:10.129711 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:10.129770 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:10.157137 1620518 cri.go:89] found id: ""
	I1209 04:45:10.157151 1620518 logs.go:282] 0 containers: []
	W1209 04:45:10.157158 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:10.157164 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:10.157223 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:10.186869 1620518 cri.go:89] found id: ""
	I1209 04:45:10.186883 1620518 logs.go:282] 0 containers: []
	W1209 04:45:10.186891 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:10.186898 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:10.186912 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:10.217015 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:10.217032 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:10.284415 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:10.284437 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:10.299713 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:10.299729 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:10.383660 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:10.374562   15372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:10.375344   15372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:10.376918   15372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:10.377428   15372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:10.379505   15372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:45:10.374562   15372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:10.375344   15372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:10.376918   15372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:10.377428   15372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:10.379505   15372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:45:10.383683 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:10.383695 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:12.956212 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:12.967122 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:12.967187 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:12.992647 1620518 cri.go:89] found id: ""
	I1209 04:45:12.992661 1620518 logs.go:282] 0 containers: []
	W1209 04:45:12.992667 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:12.992673 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:12.992731 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:13.024601 1620518 cri.go:89] found id: ""
	I1209 04:45:13.024616 1620518 logs.go:282] 0 containers: []
	W1209 04:45:13.024623 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:13.024628 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:13.024689 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:13.054508 1620518 cri.go:89] found id: ""
	I1209 04:45:13.054522 1620518 logs.go:282] 0 containers: []
	W1209 04:45:13.054529 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:13.054534 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:13.054612 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:13.080662 1620518 cri.go:89] found id: ""
	I1209 04:45:13.080681 1620518 logs.go:282] 0 containers: []
	W1209 04:45:13.080688 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:13.080693 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:13.080750 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:13.112334 1620518 cri.go:89] found id: ""
	I1209 04:45:13.112347 1620518 logs.go:282] 0 containers: []
	W1209 04:45:13.112354 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:13.112363 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:13.112421 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:13.141334 1620518 cri.go:89] found id: ""
	I1209 04:45:13.141348 1620518 logs.go:282] 0 containers: []
	W1209 04:45:13.141355 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:13.141360 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:13.141433 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:13.166692 1620518 cri.go:89] found id: ""
	I1209 04:45:13.166706 1620518 logs.go:282] 0 containers: []
	W1209 04:45:13.166713 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:13.166721 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:13.166735 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:13.230693 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:13.221480   15460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:13.222331   15460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:13.224060   15460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:13.224679   15460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:13.226481   15460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:45:13.221480   15460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:13.222331   15460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:13.224060   15460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:13.224679   15460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:13.226481   15460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:45:13.230703 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:13.230718 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:13.299665 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:13.299685 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:13.343575 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:13.343591 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:13.418530 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:13.418550 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:15.934049 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:15.944397 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:15.944459 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:15.969801 1620518 cri.go:89] found id: ""
	I1209 04:45:15.969814 1620518 logs.go:282] 0 containers: []
	W1209 04:45:15.969821 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:15.969827 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:15.969886 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:15.995679 1620518 cri.go:89] found id: ""
	I1209 04:45:15.995693 1620518 logs.go:282] 0 containers: []
	W1209 04:45:15.995700 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:15.995705 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:15.995761 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:16.029078 1620518 cri.go:89] found id: ""
	I1209 04:45:16.029092 1620518 logs.go:282] 0 containers: []
	W1209 04:45:16.029100 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:16.029105 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:16.029167 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:16.057686 1620518 cri.go:89] found id: ""
	I1209 04:45:16.057700 1620518 logs.go:282] 0 containers: []
	W1209 04:45:16.057707 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:16.057712 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:16.057773 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:16.085790 1620518 cri.go:89] found id: ""
	I1209 04:45:16.085804 1620518 logs.go:282] 0 containers: []
	W1209 04:45:16.085811 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:16.085816 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:16.085876 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:16.112272 1620518 cri.go:89] found id: ""
	I1209 04:45:16.112288 1620518 logs.go:282] 0 containers: []
	W1209 04:45:16.112295 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:16.112301 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:16.112371 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:16.137697 1620518 cri.go:89] found id: ""
	I1209 04:45:16.137711 1620518 logs.go:282] 0 containers: []
	W1209 04:45:16.137718 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:16.137726 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:16.137741 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:16.170480 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:16.170495 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:16.235651 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:16.235671 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:16.250648 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:16.250664 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:16.313079 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:16.304999   15583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:16.305695   15583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:16.307368   15583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:16.307905   15583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:16.309411   15583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
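	Every one of these describe-nodes attempts dies the same way: nothing is listening on localhost:8441, the apiserver port this profile uses, so kubectl cannot even complete API discovery. A quick way to confirm from inside the node that the port is simply closed (a sketch, assuming a shell in the node, e.g. via minikube ssh, and that ss and curl are present in the node image):

	sudo ss -ltn 'sport = :8441'              # no LISTEN entry -> no apiserver bound to 8441
	curl -k https://localhost:8441/healthz    # fails with "connection refused" while the apiserver is down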
	I1209 04:45:16.313088 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:16.313099 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
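	The block above is one full pass of minikube's apiserver wait loop: pgrep for a running kube-apiserver, ask the runtime for each expected control-plane container, and, when every query returns no IDs, gather kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before retrying. The same probes can be replayed by hand with the commands quoted in the log (again assuming a shell in the node):

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'      # is an apiserver process alive at all?
	sudo crictl ps -a --quiet --name=kube-apiserver   # did CRI-O ever create the container?
	sudo journalctl -u kubelet -n 400                 # kubelet's view of why the static pods are missing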
	I1209 04:45:18.888938 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:18.899614 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:18.899678 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:18.925762 1620518 cri.go:89] found id: ""
	I1209 04:45:18.925775 1620518 logs.go:282] 0 containers: []
	W1209 04:45:18.925782 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:18.925787 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:18.925843 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:18.952615 1620518 cri.go:89] found id: ""
	I1209 04:45:18.952629 1620518 logs.go:282] 0 containers: []
	W1209 04:45:18.952636 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:18.952641 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:18.952703 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:18.978511 1620518 cri.go:89] found id: ""
	I1209 04:45:18.978525 1620518 logs.go:282] 0 containers: []
	W1209 04:45:18.978532 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:18.978537 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:18.978620 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:19.007151 1620518 cri.go:89] found id: ""
	I1209 04:45:19.007166 1620518 logs.go:282] 0 containers: []
	W1209 04:45:19.007173 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:19.007183 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:19.007244 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:19.034621 1620518 cri.go:89] found id: ""
	I1209 04:45:19.034635 1620518 logs.go:282] 0 containers: []
	W1209 04:45:19.034643 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:19.034648 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:19.034708 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:19.063843 1620518 cri.go:89] found id: ""
	I1209 04:45:19.063856 1620518 logs.go:282] 0 containers: []
	W1209 04:45:19.063863 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:19.063868 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:19.063929 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:19.090085 1620518 cri.go:89] found id: ""
	I1209 04:45:19.090099 1620518 logs.go:282] 0 containers: []
	W1209 04:45:19.090106 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:19.090114 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:19.090125 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:19.159590 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:19.150395   15671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:19.151167   15671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:19.152762   15671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:19.153413   15671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:19.155202   15671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:45:19.159614 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:19.159626 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:19.228469 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:19.228489 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:19.257518 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:19.257534 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:19.323776 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:19.323796 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:21.846133 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:21.856537 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:21.856603 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:21.883050 1620518 cri.go:89] found id: ""
	I1209 04:45:21.883071 1620518 logs.go:282] 0 containers: []
	W1209 04:45:21.883079 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:21.883084 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:21.883144 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:21.909529 1620518 cri.go:89] found id: ""
	I1209 04:45:21.909544 1620518 logs.go:282] 0 containers: []
	W1209 04:45:21.909551 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:21.909557 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:21.909616 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:21.935426 1620518 cri.go:89] found id: ""
	I1209 04:45:21.935440 1620518 logs.go:282] 0 containers: []
	W1209 04:45:21.935447 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:21.935452 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:21.935513 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:21.964269 1620518 cri.go:89] found id: ""
	I1209 04:45:21.964283 1620518 logs.go:282] 0 containers: []
	W1209 04:45:21.964290 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:21.964295 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:21.964351 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:21.991621 1620518 cri.go:89] found id: ""
	I1209 04:45:21.991637 1620518 logs.go:282] 0 containers: []
	W1209 04:45:21.991644 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:21.991650 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:21.991710 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:22.018422 1620518 cri.go:89] found id: ""
	I1209 04:45:22.018437 1620518 logs.go:282] 0 containers: []
	W1209 04:45:22.018445 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:22.018450 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:22.018510 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:22.045499 1620518 cri.go:89] found id: ""
	I1209 04:45:22.045514 1620518 logs.go:282] 0 containers: []
	W1209 04:45:22.045522 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:22.045529 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:22.045541 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:22.111892 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:22.103280   15779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:22.104064   15779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:22.105650   15779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:22.106182   15779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:22.107773   15779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:45:22.111907 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:22.111923 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:22.180045 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:22.180065 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:22.210199 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:22.210215 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:22.276418 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:22.276439 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:24.791989 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:24.802138 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:24.802199 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:24.830421 1620518 cri.go:89] found id: ""
	I1209 04:45:24.830434 1620518 logs.go:282] 0 containers: []
	W1209 04:45:24.830441 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:24.830446 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:24.830509 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:24.855641 1620518 cri.go:89] found id: ""
	I1209 04:45:24.855653 1620518 logs.go:282] 0 containers: []
	W1209 04:45:24.855661 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:24.855666 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:24.855723 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:24.882261 1620518 cri.go:89] found id: ""
	I1209 04:45:24.882275 1620518 logs.go:282] 0 containers: []
	W1209 04:45:24.882282 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:24.882287 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:24.882346 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:24.909451 1620518 cri.go:89] found id: ""
	I1209 04:45:24.909465 1620518 logs.go:282] 0 containers: []
	W1209 04:45:24.909472 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:24.909477 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:24.909538 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:24.935023 1620518 cri.go:89] found id: ""
	I1209 04:45:24.935036 1620518 logs.go:282] 0 containers: []
	W1209 04:45:24.935043 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:24.935048 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:24.935105 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:24.965362 1620518 cri.go:89] found id: ""
	I1209 04:45:24.965375 1620518 logs.go:282] 0 containers: []
	W1209 04:45:24.965390 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:24.965396 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:24.965454 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:24.993349 1620518 cri.go:89] found id: ""
	I1209 04:45:24.993362 1620518 logs.go:282] 0 containers: []
	W1209 04:45:24.993369 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:24.993377 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:24.993387 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:25.060817 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:25.060841 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:25.077397 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:25.077415 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:25.149136 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:25.140893   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.141508   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.142608   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.143318   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:25.145008   15887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:45:25.149146 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:25.149157 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:25.218866 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:25.218886 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:27.749537 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:27.760277 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:27.760345 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:27.786623 1620518 cri.go:89] found id: ""
	I1209 04:45:27.786636 1620518 logs.go:282] 0 containers: []
	W1209 04:45:27.786643 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:27.786648 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:27.786705 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:27.813156 1620518 cri.go:89] found id: ""
	I1209 04:45:27.813169 1620518 logs.go:282] 0 containers: []
	W1209 04:45:27.813176 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:27.813181 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:27.813238 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:27.838803 1620518 cri.go:89] found id: ""
	I1209 04:45:27.838817 1620518 logs.go:282] 0 containers: []
	W1209 04:45:27.838824 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:27.838835 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:27.838896 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:27.865975 1620518 cri.go:89] found id: ""
	I1209 04:45:27.865988 1620518 logs.go:282] 0 containers: []
	W1209 04:45:27.865996 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:27.866001 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:27.866058 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:27.891739 1620518 cri.go:89] found id: ""
	I1209 04:45:27.891753 1620518 logs.go:282] 0 containers: []
	W1209 04:45:27.891761 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:27.891766 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:27.891825 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:27.922057 1620518 cri.go:89] found id: ""
	I1209 04:45:27.922071 1620518 logs.go:282] 0 containers: []
	W1209 04:45:27.922079 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:27.922084 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:27.922143 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:27.947345 1620518 cri.go:89] found id: ""
	I1209 04:45:27.947359 1620518 logs.go:282] 0 containers: []
	W1209 04:45:27.947366 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:27.947373 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:27.947384 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:28.018760 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:28.018788 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:28.035483 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:28.035508 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:28.104231 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:28.095397   15994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:28.096215   15994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:28.097976   15994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:28.098564   15994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:28.100134   15994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:45:28.104241 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:28.104253 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:28.173176 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:28.173196 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:30.707635 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:30.717972 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:30.718036 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:30.743336 1620518 cri.go:89] found id: ""
	I1209 04:45:30.743350 1620518 logs.go:282] 0 containers: []
	W1209 04:45:30.743357 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:30.743363 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:30.743420 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:30.768727 1620518 cri.go:89] found id: ""
	I1209 04:45:30.768741 1620518 logs.go:282] 0 containers: []
	W1209 04:45:30.768748 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:30.768754 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:30.768811 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:30.797959 1620518 cri.go:89] found id: ""
	I1209 04:45:30.797973 1620518 logs.go:282] 0 containers: []
	W1209 04:45:30.797980 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:30.797985 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:30.798046 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:30.825422 1620518 cri.go:89] found id: ""
	I1209 04:45:30.825435 1620518 logs.go:282] 0 containers: []
	W1209 04:45:30.825442 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:30.825448 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:30.825506 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:30.854265 1620518 cri.go:89] found id: ""
	I1209 04:45:30.854278 1620518 logs.go:282] 0 containers: []
	W1209 04:45:30.854285 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:30.854290 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:30.854347 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:30.880403 1620518 cri.go:89] found id: ""
	I1209 04:45:30.880418 1620518 logs.go:282] 0 containers: []
	W1209 04:45:30.880426 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:30.880432 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:30.880494 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:30.913767 1620518 cri.go:89] found id: ""
	I1209 04:45:30.913781 1620518 logs.go:282] 0 containers: []
	W1209 04:45:30.913789 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:30.913796 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:30.913807 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:30.980378 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:30.980398 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:30.995822 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:30.995838 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:31.066169 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:31.058055   16098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:31.058662   16098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:31.060209   16098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:31.060692   16098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:31.062141   16098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:45:31.066179 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:31.066190 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:31.138123 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:31.138142 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:33.670737 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:33.681036 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:33.681099 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:33.709926 1620518 cri.go:89] found id: ""
	I1209 04:45:33.709939 1620518 logs.go:282] 0 containers: []
	W1209 04:45:33.709947 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:33.709963 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:33.710023 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:33.737554 1620518 cri.go:89] found id: ""
	I1209 04:45:33.737567 1620518 logs.go:282] 0 containers: []
	W1209 04:45:33.737574 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:33.737579 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:33.737640 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:33.763709 1620518 cri.go:89] found id: ""
	I1209 04:45:33.763723 1620518 logs.go:282] 0 containers: []
	W1209 04:45:33.763731 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:33.763736 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:33.763794 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:33.792885 1620518 cri.go:89] found id: ""
	I1209 04:45:33.792899 1620518 logs.go:282] 0 containers: []
	W1209 04:45:33.792906 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:33.792912 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:33.792971 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:33.818657 1620518 cri.go:89] found id: ""
	I1209 04:45:33.818671 1620518 logs.go:282] 0 containers: []
	W1209 04:45:33.818678 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:33.818683 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:33.818741 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:33.845152 1620518 cri.go:89] found id: ""
	I1209 04:45:33.845167 1620518 logs.go:282] 0 containers: []
	W1209 04:45:33.845174 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:33.845179 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:33.845237 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:33.871504 1620518 cri.go:89] found id: ""
	I1209 04:45:33.871517 1620518 logs.go:282] 0 containers: []
	W1209 04:45:33.871524 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:33.871532 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:33.871543 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:33.938353 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:33.938373 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:33.954248 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:33.954267 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:34.025014 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:34.015102   16202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:34.016063   16202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:34.016884   16202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:34.018662   16202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:34.019422   16202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:45:34.025026 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:34.025038 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:34.096006 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:34.096027 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:36.630302 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:36.640925 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:36.640999 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:36.669961 1620518 cri.go:89] found id: ""
	I1209 04:45:36.669975 1620518 logs.go:282] 0 containers: []
	W1209 04:45:36.669982 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:36.669988 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:36.670044 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:36.696918 1620518 cri.go:89] found id: ""
	I1209 04:45:36.696934 1620518 logs.go:282] 0 containers: []
	W1209 04:45:36.696942 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:36.696947 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:36.697007 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:36.727113 1620518 cri.go:89] found id: ""
	I1209 04:45:36.727127 1620518 logs.go:282] 0 containers: []
	W1209 04:45:36.727136 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:36.727141 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:36.727201 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:36.752459 1620518 cri.go:89] found id: ""
	I1209 04:45:36.752473 1620518 logs.go:282] 0 containers: []
	W1209 04:45:36.752480 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:36.752485 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:36.752543 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:36.778403 1620518 cri.go:89] found id: ""
	I1209 04:45:36.778417 1620518 logs.go:282] 0 containers: []
	W1209 04:45:36.778425 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:36.778430 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:36.778488 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:36.809409 1620518 cri.go:89] found id: ""
	I1209 04:45:36.809423 1620518 logs.go:282] 0 containers: []
	W1209 04:45:36.809430 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:36.809436 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:36.809494 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:36.838444 1620518 cri.go:89] found id: ""
	I1209 04:45:36.838457 1620518 logs.go:282] 0 containers: []
	W1209 04:45:36.838464 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:36.838472 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:36.838484 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:36.853995 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:36.854011 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:36.919371 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:36.909708   16305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:36.910442   16305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:36.912223   16305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:36.912779   16305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:36.914634   16305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:45:36.919381 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:36.919395 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:36.992004 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:36.992025 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:37.033214 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:37.033230 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:39.602680 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:39.614476 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:39.614537 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:39.644626 1620518 cri.go:89] found id: ""
	I1209 04:45:39.644640 1620518 logs.go:282] 0 containers: []
	W1209 04:45:39.644647 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:39.644652 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:39.644711 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:39.673317 1620518 cri.go:89] found id: ""
	I1209 04:45:39.673331 1620518 logs.go:282] 0 containers: []
	W1209 04:45:39.673338 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:39.673343 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:39.673404 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:39.699053 1620518 cri.go:89] found id: ""
	I1209 04:45:39.699067 1620518 logs.go:282] 0 containers: []
	W1209 04:45:39.699074 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:39.699079 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:39.699141 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:39.724341 1620518 cri.go:89] found id: ""
	I1209 04:45:39.724355 1620518 logs.go:282] 0 containers: []
	W1209 04:45:39.724362 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:39.724370 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:39.724429 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:39.749975 1620518 cri.go:89] found id: ""
	I1209 04:45:39.749988 1620518 logs.go:282] 0 containers: []
	W1209 04:45:39.749995 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:39.750001 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:39.750060 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:39.774556 1620518 cri.go:89] found id: ""
	I1209 04:45:39.774588 1620518 logs.go:282] 0 containers: []
	W1209 04:45:39.774597 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:39.774602 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:39.774663 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:39.800285 1620518 cri.go:89] found id: ""
	I1209 04:45:39.800299 1620518 logs.go:282] 0 containers: []
	W1209 04:45:39.800307 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:39.800314 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:39.800325 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:39.830073 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:39.830089 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:39.898438 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:39.898457 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:39.913743 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:39.913759 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:39.982308 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:39.974192   16422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:39.974938   16422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:39.976740   16422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:39.977237   16422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:39.978358   16422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1209 04:45:39.982319 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:39.982332 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:42.561378 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:42.571315 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:42.571383 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:42.602452 1620518 cri.go:89] found id: ""
	I1209 04:45:42.602466 1620518 logs.go:282] 0 containers: []
	W1209 04:45:42.602473 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:42.602478 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:42.602541 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:42.634016 1620518 cri.go:89] found id: ""
	I1209 04:45:42.634029 1620518 logs.go:282] 0 containers: []
	W1209 04:45:42.634037 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:42.634042 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:42.634102 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:42.665601 1620518 cri.go:89] found id: ""
	I1209 04:45:42.665614 1620518 logs.go:282] 0 containers: []
	W1209 04:45:42.665621 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:42.665627 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:42.665683 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:42.692605 1620518 cri.go:89] found id: ""
	I1209 04:45:42.692618 1620518 logs.go:282] 0 containers: []
	W1209 04:45:42.692626 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:42.692631 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:42.692692 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:42.719572 1620518 cri.go:89] found id: ""
	I1209 04:45:42.719585 1620518 logs.go:282] 0 containers: []
	W1209 04:45:42.719592 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:42.719598 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:42.719660 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:42.745298 1620518 cri.go:89] found id: ""
	I1209 04:45:42.745312 1620518 logs.go:282] 0 containers: []
	W1209 04:45:42.745319 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:42.745324 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:42.745391 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:42.770685 1620518 cri.go:89] found id: ""
	I1209 04:45:42.770698 1620518 logs.go:282] 0 containers: []
	W1209 04:45:42.770706 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:42.770714 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:42.770724 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:42.840866 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:42.840888 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:42.871659 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:42.871676 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:42.941154 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:42.941174 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:42.956621 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:42.956638 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:43.026115 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:43.016607   16527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:43.017380   16527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:43.019274   16527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:43.020072   16527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:43.021739   16527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:45:43.016607   16527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:43.017380   16527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:43.019274   16527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:43.020072   16527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:43.021739   16527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:45:45.527782 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:45.537648 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:45.537707 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:45.564248 1620518 cri.go:89] found id: ""
	I1209 04:45:45.564263 1620518 logs.go:282] 0 containers: []
	W1209 04:45:45.564270 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:45.564277 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:45.564337 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:45.599479 1620518 cri.go:89] found id: ""
	I1209 04:45:45.599492 1620518 logs.go:282] 0 containers: []
	W1209 04:45:45.599499 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:45.599504 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:45.599560 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:45.629541 1620518 cri.go:89] found id: ""
	I1209 04:45:45.629554 1620518 logs.go:282] 0 containers: []
	W1209 04:45:45.629563 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:45.629568 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:45.629624 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:45.660451 1620518 cri.go:89] found id: ""
	I1209 04:45:45.660465 1620518 logs.go:282] 0 containers: []
	W1209 04:45:45.660472 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:45.660477 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:45.660537 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:45.686489 1620518 cri.go:89] found id: ""
	I1209 04:45:45.686503 1620518 logs.go:282] 0 containers: []
	W1209 04:45:45.686509 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:45.686514 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:45.686616 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:45.711940 1620518 cri.go:89] found id: ""
	I1209 04:45:45.711954 1620518 logs.go:282] 0 containers: []
	W1209 04:45:45.711961 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:45.711967 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:45.712025 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:45.737703 1620518 cri.go:89] found id: ""
	I1209 04:45:45.737717 1620518 logs.go:282] 0 containers: []
	W1209 04:45:45.737724 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:45.737732 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:45.737745 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:45.802439 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:45.793968   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:45.794602   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:45.796316   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:45.796982   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:45.798503   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:45:45.793968   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:45.794602   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:45.796316   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:45.796982   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:45.798503   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:45:45.802451 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:45.802474 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:45.871530 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:45.871550 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:45.901994 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:45.902010 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:45.973222 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:45.973241 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:48.488532 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:48.499003 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:48.499072 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:48.524749 1620518 cri.go:89] found id: ""
	I1209 04:45:48.524762 1620518 logs.go:282] 0 containers: []
	W1209 04:45:48.524769 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:48.524774 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:48.524830 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:48.553895 1620518 cri.go:89] found id: ""
	I1209 04:45:48.553909 1620518 logs.go:282] 0 containers: []
	W1209 04:45:48.553917 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:48.553922 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:48.553984 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:48.581047 1620518 cri.go:89] found id: ""
	I1209 04:45:48.581069 1620518 logs.go:282] 0 containers: []
	W1209 04:45:48.581078 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:48.581084 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:48.581153 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:48.614680 1620518 cri.go:89] found id: ""
	I1209 04:45:48.614693 1620518 logs.go:282] 0 containers: []
	W1209 04:45:48.614701 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:48.614706 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:48.614774 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:48.643818 1620518 cri.go:89] found id: ""
	I1209 04:45:48.643832 1620518 logs.go:282] 0 containers: []
	W1209 04:45:48.643839 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:48.643845 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:48.643919 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:48.669618 1620518 cri.go:89] found id: ""
	I1209 04:45:48.669632 1620518 logs.go:282] 0 containers: []
	W1209 04:45:48.669642 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:48.669647 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:48.669710 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:48.699049 1620518 cri.go:89] found id: ""
	I1209 04:45:48.699063 1620518 logs.go:282] 0 containers: []
	W1209 04:45:48.699070 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:48.699077 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:48.699088 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:48.731315 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:48.731331 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:48.798219 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:48.798239 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:48.813603 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:48.813620 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:48.877674 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:48.869445   16732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:48.870317   16732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:48.871899   16732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:48.872215   16732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:48.873716   16732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:45:48.869445   16732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:48.870317   16732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:48.871899   16732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:48.872215   16732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:48.873716   16732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:45:48.877684 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:48.877695 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:51.447558 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:51.457634 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:51.457694 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:51.487281 1620518 cri.go:89] found id: ""
	I1209 04:45:51.487294 1620518 logs.go:282] 0 containers: []
	W1209 04:45:51.487301 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:51.487306 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:51.487364 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:51.518737 1620518 cri.go:89] found id: ""
	I1209 04:45:51.518751 1620518 logs.go:282] 0 containers: []
	W1209 04:45:51.518758 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:51.518763 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:51.518837 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:51.544469 1620518 cri.go:89] found id: ""
	I1209 04:45:51.544481 1620518 logs.go:282] 0 containers: []
	W1209 04:45:51.544488 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:51.544493 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:51.544549 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:51.569588 1620518 cri.go:89] found id: ""
	I1209 04:45:51.569602 1620518 logs.go:282] 0 containers: []
	W1209 04:45:51.569624 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:51.569628 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:51.569687 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:51.612979 1620518 cri.go:89] found id: ""
	I1209 04:45:51.612992 1620518 logs.go:282] 0 containers: []
	W1209 04:45:51.612999 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:51.613004 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:51.613062 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:51.646866 1620518 cri.go:89] found id: ""
	I1209 04:45:51.646880 1620518 logs.go:282] 0 containers: []
	W1209 04:45:51.646886 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:51.646892 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:51.646954 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:51.672767 1620518 cri.go:89] found id: ""
	I1209 04:45:51.672781 1620518 logs.go:282] 0 containers: []
	W1209 04:45:51.672788 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:51.672795 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:51.672805 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:51.738601 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:51.738620 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:51.753536 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:51.753553 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:51.823113 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:51.814616   16823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:51.815237   16823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:51.816978   16823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:51.817576   16823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:51.819130   16823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:45:51.814616   16823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:51.815237   16823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:51.816978   16823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:51.817576   16823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:51.819130   16823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:45:51.823124 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:51.823134 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:51.895060 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:51.895078 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:54.424057 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:54.434546 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:54.434637 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:54.461148 1620518 cri.go:89] found id: ""
	I1209 04:45:54.461161 1620518 logs.go:282] 0 containers: []
	W1209 04:45:54.461179 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:54.461185 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:54.461245 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:54.491296 1620518 cri.go:89] found id: ""
	I1209 04:45:54.491310 1620518 logs.go:282] 0 containers: []
	W1209 04:45:54.491316 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:54.491322 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:54.491377 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:54.517141 1620518 cri.go:89] found id: ""
	I1209 04:45:54.517155 1620518 logs.go:282] 0 containers: []
	W1209 04:45:54.517162 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:54.517168 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:54.517228 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:54.543226 1620518 cri.go:89] found id: ""
	I1209 04:45:54.543245 1620518 logs.go:282] 0 containers: []
	W1209 04:45:54.543252 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:54.543258 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:54.543318 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:54.574984 1620518 cri.go:89] found id: ""
	I1209 04:45:54.574998 1620518 logs.go:282] 0 containers: []
	W1209 04:45:54.575005 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:54.575010 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:54.575069 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:54.612321 1620518 cri.go:89] found id: ""
	I1209 04:45:54.612335 1620518 logs.go:282] 0 containers: []
	W1209 04:45:54.612342 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:54.612347 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:54.612405 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:54.639817 1620518 cri.go:89] found id: ""
	I1209 04:45:54.639831 1620518 logs.go:282] 0 containers: []
	W1209 04:45:54.639839 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:54.639847 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:54.639858 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:54.704579 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:54.696022   16926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:54.696791   16926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:54.698435   16926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:54.699124   16926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:54.700720   16926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:45:54.696022   16926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:54.696791   16926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:54.698435   16926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:54.699124   16926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:54.700720   16926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:45:54.704588 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:54.704610 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:45:54.772943 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:54.772962 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:54.802082 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:54.802097 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:54.873250 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:54.873278 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:57.389092 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:45:57.399566 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:45:57.399631 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:45:57.424671 1620518 cri.go:89] found id: ""
	I1209 04:45:57.424685 1620518 logs.go:282] 0 containers: []
	W1209 04:45:57.424692 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:45:57.424698 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:45:57.424755 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:45:57.449520 1620518 cri.go:89] found id: ""
	I1209 04:45:57.449533 1620518 logs.go:282] 0 containers: []
	W1209 04:45:57.449549 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:45:57.449554 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:45:57.449612 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:45:57.474934 1620518 cri.go:89] found id: ""
	I1209 04:45:57.474949 1620518 logs.go:282] 0 containers: []
	W1209 04:45:57.474956 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:45:57.474961 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:45:57.475017 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:45:57.504272 1620518 cri.go:89] found id: ""
	I1209 04:45:57.504285 1620518 logs.go:282] 0 containers: []
	W1209 04:45:57.504292 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:45:57.504297 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:45:57.504355 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:45:57.530784 1620518 cri.go:89] found id: ""
	I1209 04:45:57.530797 1620518 logs.go:282] 0 containers: []
	W1209 04:45:57.530804 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:45:57.530820 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:45:57.530878 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:45:57.556189 1620518 cri.go:89] found id: ""
	I1209 04:45:57.556202 1620518 logs.go:282] 0 containers: []
	W1209 04:45:57.556209 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:45:57.556214 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:45:57.556271 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:45:57.584245 1620518 cri.go:89] found id: ""
	I1209 04:45:57.584258 1620518 logs.go:282] 0 containers: []
	W1209 04:45:57.584266 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:45:57.584273 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:45:57.584286 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:45:57.618235 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:45:57.618250 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:45:57.693384 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:45:57.693403 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:45:57.708210 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:45:57.708227 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:45:57.773409 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:45:57.765285   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:57.766046   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:57.767558   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:57.768018   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:57.769496   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:45:57.765285   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:57.766046   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:57.767558   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:57.768018   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:45:57.769496   17046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:45:57.773420 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:45:57.773430 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:46:00.342809 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:46:00.358795 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:46:00.358876 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:46:00.400877 1620518 cri.go:89] found id: ""
	I1209 04:46:00.400892 1620518 logs.go:282] 0 containers: []
	W1209 04:46:00.400900 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:46:00.400906 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:46:00.400970 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:46:00.431798 1620518 cri.go:89] found id: ""
	I1209 04:46:00.431813 1620518 logs.go:282] 0 containers: []
	W1209 04:46:00.431820 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:46:00.431828 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:46:00.431892 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:46:00.460666 1620518 cri.go:89] found id: ""
	I1209 04:46:00.460686 1620518 logs.go:282] 0 containers: []
	W1209 04:46:00.460693 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:46:00.460698 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:46:00.460761 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:46:00.488457 1620518 cri.go:89] found id: ""
	I1209 04:46:00.488471 1620518 logs.go:282] 0 containers: []
	W1209 04:46:00.488479 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:46:00.488484 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:46:00.488551 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:46:00.517784 1620518 cri.go:89] found id: ""
	I1209 04:46:00.517797 1620518 logs.go:282] 0 containers: []
	W1209 04:46:00.517805 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:46:00.517810 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:46:00.517873 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:46:00.545946 1620518 cri.go:89] found id: ""
	I1209 04:46:00.545960 1620518 logs.go:282] 0 containers: []
	W1209 04:46:00.545968 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:46:00.545973 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:46:00.546035 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:46:00.575131 1620518 cri.go:89] found id: ""
	I1209 04:46:00.575153 1620518 logs.go:282] 0 containers: []
	W1209 04:46:00.575161 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:46:00.575168 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:46:00.575179 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:46:00.612360 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:46:00.612379 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:46:00.689205 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:46:00.689224 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:46:00.704596 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:46:00.704612 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:46:00.770156 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:46:00.762022   17152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:00.762546   17152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:00.764120   17152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:00.764452   17152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:00.765962   17152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:46:00.762022   17152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:00.762546   17152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:00.764120   17152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:00.764452   17152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:00.765962   17152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:46:00.770165 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:46:00.770175 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:46:03.338719 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:46:03.349336 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:46:03.349402 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:46:03.374937 1620518 cri.go:89] found id: ""
	I1209 04:46:03.374950 1620518 logs.go:282] 0 containers: []
	W1209 04:46:03.374957 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:46:03.374963 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:46:03.375022 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:46:03.405176 1620518 cri.go:89] found id: ""
	I1209 04:46:03.405206 1620518 logs.go:282] 0 containers: []
	W1209 04:46:03.405213 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:46:03.405219 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:46:03.405285 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:46:03.434836 1620518 cri.go:89] found id: ""
	I1209 04:46:03.434860 1620518 logs.go:282] 0 containers: []
	W1209 04:46:03.434868 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:46:03.434874 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:46:03.434948 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:46:03.464055 1620518 cri.go:89] found id: ""
	I1209 04:46:03.464077 1620518 logs.go:282] 0 containers: []
	W1209 04:46:03.464085 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:46:03.464090 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:46:03.464189 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:46:03.493083 1620518 cri.go:89] found id: ""
	I1209 04:46:03.493106 1620518 logs.go:282] 0 containers: []
	W1209 04:46:03.493114 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:46:03.493119 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:46:03.493194 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:46:03.518929 1620518 cri.go:89] found id: ""
	I1209 04:46:03.518942 1620518 logs.go:282] 0 containers: []
	W1209 04:46:03.518950 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:46:03.518955 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:46:03.519016 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:46:03.543738 1620518 cri.go:89] found id: ""
	I1209 04:46:03.543751 1620518 logs.go:282] 0 containers: []
	W1209 04:46:03.543758 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:46:03.543766 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:46:03.543776 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:46:03.611972 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:46:03.611992 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:46:03.644882 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:46:03.644905 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:46:03.715853 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:46:03.715873 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:46:03.730852 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:46:03.730870 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:46:03.797963 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:46:03.789266   17259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:03.790005   17259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:03.791740   17259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:03.792349   17259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:03.794037   17259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:46:03.789266   17259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:03.790005   17259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:03.791740   17259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:03.792349   17259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:03.794037   17259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:46:06.299034 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:46:06.310369 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:46:06.310430 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:46:06.338011 1620518 cri.go:89] found id: ""
	I1209 04:46:06.338024 1620518 logs.go:282] 0 containers: []
	W1209 04:46:06.338031 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:46:06.338037 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:46:06.338093 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:46:06.364537 1620518 cri.go:89] found id: ""
	I1209 04:46:06.364551 1620518 logs.go:282] 0 containers: []
	W1209 04:46:06.364558 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:46:06.364566 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:46:06.364621 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:46:06.390874 1620518 cri.go:89] found id: ""
	I1209 04:46:06.390894 1620518 logs.go:282] 0 containers: []
	W1209 04:46:06.390907 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:46:06.390912 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:46:06.390972 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:46:06.416068 1620518 cri.go:89] found id: ""
	I1209 04:46:06.416082 1620518 logs.go:282] 0 containers: []
	W1209 04:46:06.416088 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:46:06.416093 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:46:06.416152 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:46:06.445711 1620518 cri.go:89] found id: ""
	I1209 04:46:06.445724 1620518 logs.go:282] 0 containers: []
	W1209 04:46:06.445731 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:46:06.445736 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:46:06.445794 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:46:06.472619 1620518 cri.go:89] found id: ""
	I1209 04:46:06.472632 1620518 logs.go:282] 0 containers: []
	W1209 04:46:06.472639 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:46:06.472644 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:46:06.472704 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:46:06.501335 1620518 cri.go:89] found id: ""
	I1209 04:46:06.501348 1620518 logs.go:282] 0 containers: []
	W1209 04:46:06.501355 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:46:06.501372 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:46:06.501382 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:46:06.564989 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:46:06.556947   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:06.557432   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:06.559150   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:06.559456   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:06.560989   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:46:06.556947   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:06.557432   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:06.559150   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:06.559456   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:46:06.560989   17340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:46:06.564998 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:46:06.565009 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 04:46:06.636608 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:46:06.636626 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:46:06.667969 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:46:06.667986 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:46:06.734125 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:46:06.734145 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:46:09.249456 1620518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 04:46:09.259765 1620518 kubeadm.go:602] duration metric: took 4m2.693827645s to restartPrimaryControlPlane
	W1209 04:46:09.259826 1620518 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1209 04:46:09.259905 1620518 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1209 04:46:09.672351 1620518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 04:46:09.685870 1620518 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 04:46:09.693855 1620518 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 04:46:09.693913 1620518 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 04:46:09.701686 1620518 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 04:46:09.701697 1620518 kubeadm.go:158] found existing configuration files:
	
	I1209 04:46:09.701750 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1209 04:46:09.709486 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 04:46:09.709542 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 04:46:09.717080 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1209 04:46:09.724681 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 04:46:09.724735 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 04:46:09.732335 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1209 04:46:09.740201 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 04:46:09.740255 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 04:46:09.747717 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1209 04:46:09.755316 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 04:46:09.755370 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 04:46:09.762723 1620518 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 04:46:09.800341 1620518 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1209 04:46:09.800668 1620518 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 04:46:09.867665 1620518 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 04:46:09.867727 1620518 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1209 04:46:09.867766 1620518 kubeadm.go:319] OS: Linux
	I1209 04:46:09.867807 1620518 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 04:46:09.867852 1620518 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1209 04:46:09.867896 1620518 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 04:46:09.867942 1620518 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 04:46:09.867987 1620518 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 04:46:09.868032 1620518 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 04:46:09.868074 1620518 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 04:46:09.868120 1620518 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 04:46:09.868162 1620518 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1209 04:46:09.937281 1620518 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 04:46:09.937384 1620518 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 04:46:09.937481 1620518 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 04:46:09.947317 1620518 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 04:46:09.952721 1620518 out.go:252]   - Generating certificates and keys ...
	I1209 04:46:09.952808 1620518 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 04:46:09.952877 1620518 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 04:46:09.952958 1620518 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 04:46:09.953021 1620518 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1209 04:46:09.953092 1620518 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 04:46:09.953141 1620518 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1209 04:46:09.953206 1620518 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1209 04:46:09.953269 1620518 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1209 04:46:09.953343 1620518 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 04:46:09.953417 1620518 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 04:46:09.953461 1620518 kubeadm.go:319] [certs] Using the existing "sa" key
	I1209 04:46:09.953513 1620518 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 04:46:10.029245 1620518 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 04:46:10.224354 1620518 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 04:46:10.667691 1620518 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 04:46:10.882600 1620518 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 04:46:11.073140 1620518 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 04:46:11.073694 1620518 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 04:46:11.076408 1620518 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 04:46:11.079859 1620518 out.go:252]   - Booting up control plane ...
	I1209 04:46:11.079965 1620518 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 04:46:11.080042 1620518 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 04:46:11.080114 1620518 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 04:46:11.095853 1620518 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 04:46:11.095951 1620518 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 04:46:11.104994 1620518 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 04:46:11.105485 1620518 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 04:46:11.105715 1620518 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 04:46:11.236975 1620518 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 04:46:11.237088 1620518 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 04:50:11.237231 1620518 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000344141s
	I1209 04:50:11.237256 1620518 kubeadm.go:319] 
	I1209 04:50:11.237309 1620518 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1209 04:50:11.237340 1620518 kubeadm.go:319] 	- The kubelet is not running
	I1209 04:50:11.237438 1620518 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 04:50:11.237443 1620518 kubeadm.go:319] 
	I1209 04:50:11.237541 1620518 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 04:50:11.237571 1620518 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1209 04:50:11.237600 1620518 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1209 04:50:11.237603 1620518 kubeadm.go:319] 
	I1209 04:50:11.241458 1620518 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 04:50:11.241910 1620518 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1209 04:50:11.242023 1620518 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 04:50:11.242266 1620518 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1209 04:50:11.242272 1620518 kubeadm.go:319] 
	I1209 04:50:11.242336 1620518 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1209 04:50:11.242454 1620518 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000344141s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1209 04:50:11.242544 1620518 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1209 04:50:11.655787 1620518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 04:50:11.668676 1620518 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 04:50:11.668730 1620518 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 04:50:11.676546 1620518 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 04:50:11.676562 1620518 kubeadm.go:158] found existing configuration files:
	
	I1209 04:50:11.676612 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1209 04:50:11.684172 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 04:50:11.684236 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 04:50:11.691594 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1209 04:50:11.699302 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 04:50:11.699363 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 04:50:11.706772 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1209 04:50:11.714846 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 04:50:11.714902 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 04:50:11.722267 1620518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1209 04:50:11.730186 1620518 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 04:50:11.730250 1620518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 04:50:11.738143 1620518 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 04:50:11.781074 1620518 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1209 04:50:11.781123 1620518 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 04:50:11.856141 1620518 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 04:50:11.856206 1620518 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1209 04:50:11.856240 1620518 kubeadm.go:319] OS: Linux
	I1209 04:50:11.856283 1620518 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 04:50:11.856330 1620518 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1209 04:50:11.856377 1620518 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 04:50:11.856424 1620518 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 04:50:11.856471 1620518 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 04:50:11.856522 1620518 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 04:50:11.856566 1620518 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 04:50:11.856614 1620518 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 04:50:11.856660 1620518 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1209 04:50:11.927746 1620518 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 04:50:11.927875 1620518 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 04:50:11.927971 1620518 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 04:50:11.934983 1620518 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 04:50:11.938507 1620518 out.go:252]   - Generating certificates and keys ...
	I1209 04:50:11.938697 1620518 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 04:50:11.938772 1620518 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 04:50:11.938867 1620518 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 04:50:11.938937 1620518 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1209 04:50:11.939018 1620518 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 04:50:11.939071 1620518 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1209 04:50:11.939143 1620518 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1209 04:50:11.939213 1620518 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1209 04:50:11.939302 1620518 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 04:50:11.939383 1620518 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 04:50:11.939690 1620518 kubeadm.go:319] [certs] Using the existing "sa" key
	I1209 04:50:11.939748 1620518 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 04:50:12.353584 1620518 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 04:50:12.812738 1620518 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 04:50:13.265058 1620518 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 04:50:13.417250 1620518 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 04:50:13.472548 1620518 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 04:50:13.473076 1620518 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 04:50:13.475724 1620518 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 04:50:13.478920 1620518 out.go:252]   - Booting up control plane ...
	I1209 04:50:13.479026 1620518 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 04:50:13.479104 1620518 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 04:50:13.479930 1620518 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 04:50:13.496348 1620518 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 04:50:13.496458 1620518 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 04:50:13.504378 1620518 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 04:50:13.504655 1620518 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 04:50:13.504696 1620518 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 04:50:13.630713 1620518 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 04:50:13.630826 1620518 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 04:54:13.630972 1620518 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000259173s
	I1209 04:54:13.630997 1620518 kubeadm.go:319] 
	I1209 04:54:13.631053 1620518 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1209 04:54:13.631086 1620518 kubeadm.go:319] 	- The kubelet is not running
	I1209 04:54:13.631200 1620518 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 04:54:13.631206 1620518 kubeadm.go:319] 
	I1209 04:54:13.631310 1620518 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 04:54:13.631395 1620518 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1209 04:54:13.631461 1620518 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1209 04:54:13.631466 1620518 kubeadm.go:319] 
	I1209 04:54:13.635649 1620518 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 04:54:13.636127 1620518 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1209 04:54:13.636242 1620518 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 04:54:13.636479 1620518 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1209 04:54:13.636485 1620518 kubeadm.go:319] 
	I1209 04:54:13.636553 1620518 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1209 04:54:13.636616 1620518 kubeadm.go:403] duration metric: took 12m7.110467735s to StartCluster
	I1209 04:54:13.636648 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 04:54:13.636715 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 04:54:13.662011 1620518 cri.go:89] found id: ""
	I1209 04:54:13.662024 1620518 logs.go:282] 0 containers: []
	W1209 04:54:13.662032 1620518 logs.go:284] No container was found matching "kube-apiserver"
	I1209 04:54:13.662037 1620518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 04:54:13.662094 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 04:54:13.688278 1620518 cri.go:89] found id: ""
	I1209 04:54:13.688293 1620518 logs.go:282] 0 containers: []
	W1209 04:54:13.688299 1620518 logs.go:284] No container was found matching "etcd"
	I1209 04:54:13.688304 1620518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 04:54:13.688363 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 04:54:13.714700 1620518 cri.go:89] found id: ""
	I1209 04:54:13.714715 1620518 logs.go:282] 0 containers: []
	W1209 04:54:13.714723 1620518 logs.go:284] No container was found matching "coredns"
	I1209 04:54:13.714729 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 04:54:13.714795 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 04:54:13.740152 1620518 cri.go:89] found id: ""
	I1209 04:54:13.740166 1620518 logs.go:282] 0 containers: []
	W1209 04:54:13.740173 1620518 logs.go:284] No container was found matching "kube-scheduler"
	I1209 04:54:13.740178 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 04:54:13.740235 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 04:54:13.766214 1620518 cri.go:89] found id: ""
	I1209 04:54:13.766227 1620518 logs.go:282] 0 containers: []
	W1209 04:54:13.766235 1620518 logs.go:284] No container was found matching "kube-proxy"
	I1209 04:54:13.766240 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 04:54:13.766300 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 04:54:13.793141 1620518 cri.go:89] found id: ""
	I1209 04:54:13.793155 1620518 logs.go:282] 0 containers: []
	W1209 04:54:13.793162 1620518 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 04:54:13.793168 1620518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 04:54:13.793225 1620518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 04:54:13.824264 1620518 cri.go:89] found id: ""
	I1209 04:54:13.824278 1620518 logs.go:282] 0 containers: []
	W1209 04:54:13.824286 1620518 logs.go:284] No container was found matching "kindnet"
	I1209 04:54:13.824294 1620518 logs.go:123] Gathering logs for container status ...
	I1209 04:54:13.824305 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 04:54:13.865509 1620518 logs.go:123] Gathering logs for kubelet ...
	I1209 04:54:13.865527 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 04:54:13.944055 1620518 logs.go:123] Gathering logs for dmesg ...
	I1209 04:54:13.944075 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 04:54:13.960571 1620518 logs.go:123] Gathering logs for describe nodes ...
	I1209 04:54:13.960593 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 04:54:14.028160 1620518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:54:14.019001   21174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:54:14.019792   21174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:54:14.021489   21174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:54:14.021862   21174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:54:14.023410   21174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1209 04:54:14.019001   21174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:54:14.019792   21174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:54:14.021489   21174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:54:14.021862   21174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:54:14.023410   21174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 04:54:14.028170 1620518 logs.go:123] Gathering logs for CRI-O ...
	I1209 04:54:14.028180 1620518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1209 04:54:14.099915 1620518 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000259173s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1209 04:54:14.099962 1620518 out.go:285] * 
	W1209 04:54:14.100108 1620518 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000259173s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 04:54:14.100197 1620518 out.go:285] * 
	W1209 04:54:14.102317 1620518 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 04:54:14.107888 1620518 out.go:203] 
	W1209 04:54:14.111655 1620518 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000259173s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 04:54:14.111892 1620518 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1209 04:54:14.111932 1620518 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1209 04:54:14.116964 1620518 out.go:203] 
	
	
	==> CRI-O <==
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927580587Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927620637Z" level=info msg="Starting seccomp notifier watcher"
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927668178Z" level=info msg="Create NRI interface"
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927758033Z" level=info msg="built-in NRI default validator is disabled"
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927766493Z" level=info msg="runtime interface created"
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927780007Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927786308Z" level=info msg="runtime interface starting up..."
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927792741Z" level=info msg="starting plugins..."
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927805771Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 09 04:42:04 functional-331811 crio[9992]: time="2025-12-09T04:42:04.927872323Z" level=info msg="No systemd watchdog enabled"
	Dec 09 04:42:04 functional-331811 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 09 04:46:09 functional-331811 crio[9992]: time="2025-12-09T04:46:09.942951614Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=d42015e0-8a7e-47f7-95a2-398ea8aa48f1 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:46:09 functional-331811 crio[9992]: time="2025-12-09T04:46:09.943749037Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=554d2336-7df0-4ab3-87a2-3f0040c79a84 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:46:09 functional-331811 crio[9992]: time="2025-12-09T04:46:09.944291229Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=70fb14c4-f971-4387-8e1b-10c98c4791aa name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:46:09 functional-331811 crio[9992]: time="2025-12-09T04:46:09.944730675Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=36db540a-ff25-4b5c-b7d7-cd7322fbd4bb name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:46:09 functional-331811 crio[9992]: time="2025-12-09T04:46:09.945138629Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=7427d70a-8db2-44c3-88f8-0607ec671ff6 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:46:09 functional-331811 crio[9992]: time="2025-12-09T04:46:09.945576229Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=b63b04fd-62c4-4cf0-9b5b-23eef2eb12c5 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:46:09 functional-331811 crio[9992]: time="2025-12-09T04:46:09.946074564Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=287329f7-949c-4b5b-8433-0437004398fd name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.930917732Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=60059689-b22e-4d2c-a555-518b088e6c52 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.93157629Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=cbef184f-5cab-42ab-88e7-b508de5c76c0 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.932075323Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=edcddd48-11b2-4a3e-b703-e9cffa332272 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.932520767Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=b8ee1139-0fe9-45a4-8cea-2e86a978a2fc name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.932923437Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=466ae3ad-f5a9-4d87-be0b-42f8886ae7b1 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.933429871Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=52758864-5ad7-4972-9017-2c4a591649f4 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.933861662Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=61e91b9e-e75b-4cf2-b677-070bdf524fb9 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:56:13.702459   22682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:56:13.703155   22682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:56:13.704850   22682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:56:13.705518   22682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:56:13.707292   22682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 9 02:15] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 03:35] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 04:15] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 04:17] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:23] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:24] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:41] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 04:56:13 up  9:38,  0 user,  load average: 0.44, 0.24, 0.42
	Linux functional-331811 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 09 04:56:11 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:56:11 functional-331811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1118.
	Dec 09 04:56:11 functional-331811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:56:11 functional-331811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:56:11 functional-331811 kubelet[22572]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:56:11 functional-331811 kubelet[22572]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:56:11 functional-331811 kubelet[22572]: E1209 04:56:11.869892   22572 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:56:11 functional-331811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:56:11 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:56:12 functional-331811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1119.
	Dec 09 04:56:12 functional-331811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:56:12 functional-331811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:56:12 functional-331811 kubelet[22578]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:56:12 functional-331811 kubelet[22578]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:56:12 functional-331811 kubelet[22578]: E1209 04:56:12.639755   22578 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:56:12 functional-331811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:56:12 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:56:13 functional-331811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1120.
	Dec 09 04:56:13 functional-331811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:56:13 functional-331811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:56:13 functional-331811 kubelet[22601]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:56:13 functional-331811 kubelet[22601]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:56:13 functional-331811 kubelet[22601]: E1209 04:56:13.374223   22601 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:56:13 functional-331811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:56:13 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-331811 -n functional-331811
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-331811 -n functional-331811: exit status 2 (346.736361ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-331811" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.32s)
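
The kubelet restart loop captured above and the kubeadm preflight warning point at the same root cause: the node is still running cgroup v1, which kubelet v1.35 refuses to validate by default ("kubelet is configured to not run on a host using cgroup v1"). The warning names the opt-out explicitly, the KubeletConfiguration option FailCgroupV1. The fragment below is a minimal sketch of that opt-out; the field name comes from the warning itself, but wiring it into this minikube run (for example, merging it into the /var/lib/kubelet/config.yaml the init log shows being written) is an assumption, not something this report demonstrates.

	# Sketch only: KubeletConfiguration fragment opting back in to the
	# deprecated cgroup v1 support named by the preflight warning above.
	# How the fragment reaches /var/lib/kubelet/config.yaml is assumed.
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false

Per the same warning, this is a stopgap: cgroup v1 support is deprecated, and the recommended direction is migrating the host to cgroup v2 (see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1).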

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.73s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous line repeated 9 more times]
E1209 04:54:31.980578 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1209 04:54:32.375476 1580521 retry.go:31] will retry after 3.397131675s: Temporary Error: Get "http://10.103.179.75": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous line repeated 13 more times]
I1209 04:54:45.773365 1580521 retry.go:31] will retry after 2.668490016s: Temporary Error: Get "http://10.103.179.75": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous line repeated 11 more times]
I1209 04:54:58.443165 1580521 retry.go:31] will retry after 9.525565153s: Temporary Error: Get "http://10.103.179.75": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous line repeated 19 more times]
I1209 04:55:17.969922 1580521 retry.go:31] will retry after 12.319355663s: Temporary Error: Get "http://10.103.179.75": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous line repeated 21 more times]
I1209 04:55:40.290826 1580521 retry.go:31] will retry after 21.656044707s: Temporary Error: Get "http://10.103.179.75": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[last message repeated 19 more times]
E1209 04:56:21.781366 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-790468/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[last message repeated 72 more times]
E1209 04:57:35.060053 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[last message repeated 46 more times]
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
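The warnings above come from a poll loop that repeatedly lists pods by label until a deadline. A minimal client-go sketch of that pattern (the kubeconfig path, 2-second interval, and existence-only check are assumptions for illustration; the real helper also waits for the pods to reach Running):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config; the harness points this at the profile's context.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Same GET as in the warnings: /api/v1/namespaces/kube-system/pods
	// ?labelSelector=integration-test%3Dstorage-provisioner, retried until
	// the 4m0s deadline that produced "context deadline exceeded" above.
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pods, err := client.CoreV1().Pods("kube-system").List(ctx,
			metav1.ListOptions{LabelSelector: "integration-test=storage-provisioner"})
		if err == nil && len(pods.Items) > 0 {
			fmt.Printf("found %d pod(s)\n", len(pods.Items))
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("gave up:", ctx.Err())
			return
		case <-time.After(2 * time.Second):
		}
	}
}

With the apiserver down, every List fails with "connection refused" and the loop keeps retrying until the client-side rate limiter and then the context deadline stop it, which matches the progression in the log above.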
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-331811 -n functional-331811
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-331811 -n functional-331811: exit status 2 (329.59948ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-331811" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
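For reference, the status probe above can be reproduced outside the harness: --format takes a Go template over minikube's status struct, so {{.APIServer}} prints just that component. A minimal sketch (binary path and profile name are taken from this run; the exit-code handling is an assumption about how the harness inspects the result):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.APIServer}}", "-p", "functional-331811", "-n", "functional-331811")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out) // "Stopped" in the run above
	if ee, ok := err.(*exec.ExitError); ok {
		// minikube signals non-running components through a non-zero exit
		// code; the harness treats status 2 here as non-fatal ("may be ok").
		fmt.Println("exit status:", ee.ExitCode())
	}
}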
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
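The docker inspect dump that follows is long; for triaging connection-refused errors like those above, the container state and the host port mapped to the apiserver's 8441/tcp are usually the relevant fields. A minimal Go sketch that pulls just those (the decode struct is illustrative, shaped after the field names visible in the dump below):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// docker inspect emits a JSON array with one element per container;
// declare only the fields we actually read.
type containerInfo struct {
	State struct {
		Status string
	}
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "functional-331811").Output()
	if err != nil {
		panic(err)
	}
	var info []containerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	// Prints "running" plus the 127.0.0.1 mapping for 8441/tcp
	// (HostPort 34258 in this run), per the dump below.
	fmt.Println(info[0].State.Status, info[0].NetworkSettings.Ports["8441/tcp"])
}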
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-331811
helpers_test.go:243: (dbg) docker inspect functional-331811:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87",
	        "Created": "2025-12-09T04:27:19.770188806Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1609115,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T04:27:19.828715728Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/hostname",
	        "HostsPath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/hosts",
	        "LogPath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87-json.log",
	        "Name": "/functional-331811",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-331811:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-331811",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87",
	                "LowerDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685-init/diff:/var/lib/docker/overlay2/cb3f2b8eaaa8875b2899fccd39c4eec1759909855a0b804bc10246bdeabb16ed/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-331811",
	                "Source": "/var/lib/docker/volumes/functional-331811/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-331811",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-331811",
	                "name.minikube.sigs.k8s.io": "functional-331811",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5c0753338127320f08906f0ae98414e1971b55970cf028db179c2214fd2722cb",
	            "SandboxKey": "/var/run/docker/netns/5c0753338127",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34255"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34256"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34259"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34257"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34258"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-331811": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:27:66:bb:a1:d6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8c16962547dedb5d6155d1546bcc27e347ab5261f9ad46fc3b09cc8fb9cc112f",
	                    "EndpointID": "1a5d6a22e9497009b4121ea56dc4839e2ff8827d92252c0464236c5f49c11216",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-331811",
	                        "51da5dad63e9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
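The docker inspect dump above matters mostly for three facts: the container is running, the apiserver port 8441/tcp is published to 127.0.0.1:34258, and the node sits at 192.168.49.2 on the functional-331811 network. For manual triage the same fields can be read directly with docker inspect Go templates instead of scanning the full JSON; a minimal sketch (profile name taken from this report, otherwise stock docker CLI):

	# Host port that Docker mapped to the apiserver's 8441/tcp:
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-331811
	# Container IP on the cluster network:
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' functional-331811
	# One-word container state:
	docker inspect -f '{{.State.Status}}' functional-331811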
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-331811 -n functional-331811
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-331811 -n functional-331811: exit status 2 (335.140391ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-331811 image load --daemon kicbase/echo-server:functional-331811 --alsologtostderr                                                             │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ image          │ functional-331811 image ls                                                                                                                                │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ image          │ functional-331811 image save kicbase/echo-server:functional-331811 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ image          │ functional-331811 image rm kicbase/echo-server:functional-331811 --alsologtostderr                                                                        │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ image          │ functional-331811 image ls                                                                                                                                │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ image          │ functional-331811 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ image          │ functional-331811 image ls                                                                                                                                │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ image          │ functional-331811 image save --daemon kicbase/echo-server:functional-331811 --alsologtostderr                                                             │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ ssh            │ functional-331811 ssh sudo cat /etc/ssl/certs/1580521.pem                                                                                                 │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ ssh            │ functional-331811 ssh sudo cat /usr/share/ca-certificates/1580521.pem                                                                                     │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ ssh            │ functional-331811 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ ssh            │ functional-331811 ssh sudo cat /etc/ssl/certs/15805212.pem                                                                                                │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ ssh            │ functional-331811 ssh sudo cat /usr/share/ca-certificates/15805212.pem                                                                                    │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ ssh            │ functional-331811 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ ssh            │ functional-331811 ssh sudo cat /etc/test/nested/copy/1580521/hosts                                                                                        │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ image          │ functional-331811 image ls --format short --alsologtostderr                                                                                               │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ image          │ functional-331811 image ls --format yaml --alsologtostderr                                                                                                │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ ssh            │ functional-331811 ssh pgrep buildkitd                                                                                                                     │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │                     │
	│ image          │ functional-331811 image build -t localhost/my-image:functional-331811 testdata/build --alsologtostderr                                                    │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ image          │ functional-331811 image ls                                                                                                                                │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ image          │ functional-331811 image ls --format json --alsologtostderr                                                                                                │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ image          │ functional-331811 image ls --format table --alsologtostderr                                                                                               │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ update-context │ functional-331811 update-context --alsologtostderr -v=2                                                                                                   │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ update-context │ functional-331811 update-context --alsologtostderr -v=2                                                                                                   │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ update-context │ functional-331811 update-context --alsologtostderr -v=2                                                                                                   │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 04:56:29
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 04:56:29.281449 1637842 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:56:29.281669 1637842 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:56:29.281696 1637842 out.go:374] Setting ErrFile to fd 2...
	I1209 04:56:29.281715 1637842 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:56:29.282029 1637842 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 04:56:29.282505 1637842 out.go:368] Setting JSON to false
	I1209 04:56:29.283419 1637842 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":34730,"bootTime":1765221460,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1209 04:56:29.283520 1637842 start.go:143] virtualization:  
	I1209 04:56:29.286665 1637842 out.go:179] * [functional-331811] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 04:56:29.289651 1637842 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 04:56:29.289721 1637842 notify.go:221] Checking for updates...
	I1209 04:56:29.293455 1637842 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:56:29.296286 1637842 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:56:29.299012 1637842 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1577059/.minikube
	I1209 04:56:29.301803 1637842 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 04:56:29.305170 1637842 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 04:56:29.308437 1637842 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 04:56:29.309049 1637842 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:56:29.342793 1637842 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:56:29.342909 1637842 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:56:29.397279 1637842 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 04:56:29.387560385 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:56:29.397384 1637842 docker.go:319] overlay module found
	I1209 04:56:29.400602 1637842 out.go:179] * Using the docker driver based on existing profile
	I1209 04:56:29.403610 1637842 start.go:309] selected driver: docker
	I1209 04:56:29.403639 1637842 start.go:927] validating driver "docker" against &{Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:56:29.403735 1637842 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 04:56:29.403853 1637842 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:56:29.458136 1637842 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 04:56:29.448851846 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:56:29.458670 1637842 cni.go:84] Creating CNI manager for ""
	I1209 04:56:29.458760 1637842 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 04:56:29.458801 1637842 start.go:353] cluster config:
	{Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:56:29.463801 1637842 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.930917732Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=60059689-b22e-4d2c-a555-518b088e6c52 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.93157629Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=cbef184f-5cab-42ab-88e7-b508de5c76c0 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.932075323Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=edcddd48-11b2-4a3e-b703-e9cffa332272 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.932520767Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=b8ee1139-0fe9-45a4-8cea-2e86a978a2fc name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.932923437Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=466ae3ad-f5a9-4d87-be0b-42f8886ae7b1 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.933429871Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=52758864-5ad7-4972-9017-2c4a591649f4 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.933861662Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=61e91b9e-e75b-4cf2-b677-070bdf524fb9 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:56:33 functional-331811 crio[9992]: time="2025-12-09T04:56:33.016673909Z" level=info msg="Checking image status: kicbase/echo-server:functional-331811" id=9cbd7c8a-77b2-4776-8d0a-1d5dda59c6e2 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:56:33 functional-331811 crio[9992]: time="2025-12-09T04:56:33.016864385Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 09 04:56:33 functional-331811 crio[9992]: time="2025-12-09T04:56:33.016906035Z" level=info msg="Image kicbase/echo-server:functional-331811 not found" id=9cbd7c8a-77b2-4776-8d0a-1d5dda59c6e2 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:56:33 functional-331811 crio[9992]: time="2025-12-09T04:56:33.016966909Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-331811 found" id=9cbd7c8a-77b2-4776-8d0a-1d5dda59c6e2 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:56:33 functional-331811 crio[9992]: time="2025-12-09T04:56:33.04345603Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-331811" id=6bef986e-d18b-47c0-b9e4-fc7e3b931b02 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:56:33 functional-331811 crio[9992]: time="2025-12-09T04:56:33.043617755Z" level=info msg="Image docker.io/kicbase/echo-server:functional-331811 not found" id=6bef986e-d18b-47c0-b9e4-fc7e3b931b02 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:56:33 functional-331811 crio[9992]: time="2025-12-09T04:56:33.043657919Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-331811 found" id=6bef986e-d18b-47c0-b9e4-fc7e3b931b02 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:56:33 functional-331811 crio[9992]: time="2025-12-09T04:56:33.071514167Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-331811" id=e712b986-974a-4fc0-987d-0e3912ad1749 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:56:33 functional-331811 crio[9992]: time="2025-12-09T04:56:33.071658604Z" level=info msg="Image localhost/kicbase/echo-server:functional-331811 not found" id=e712b986-974a-4fc0-987d-0e3912ad1749 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:56:33 functional-331811 crio[9992]: time="2025-12-09T04:56:33.071701057Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-331811 found" id=e712b986-974a-4fc0-987d-0e3912ad1749 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:56:36 functional-331811 crio[9992]: time="2025-12-09T04:56:36.139111094Z" level=info msg="Checking image status: kicbase/echo-server:functional-331811" id=2b8da84b-490c-4709-954a-74ca24e41d3b name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:56:36 functional-331811 crio[9992]: time="2025-12-09T04:56:36.139296203Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 09 04:56:36 functional-331811 crio[9992]: time="2025-12-09T04:56:36.139336745Z" level=info msg="Image kicbase/echo-server:functional-331811 not found" id=2b8da84b-490c-4709-954a-74ca24e41d3b name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:56:36 functional-331811 crio[9992]: time="2025-12-09T04:56:36.139417033Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-331811 found" id=2b8da84b-490c-4709-954a-74ca24e41d3b name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:56:36 functional-331811 crio[9992]: time="2025-12-09T04:56:36.165476005Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-331811" id=1a36fa98-d555-454b-9bda-2ec4b9fba78d name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:56:36 functional-331811 crio[9992]: time="2025-12-09T04:56:36.165617881Z" level=info msg="Image docker.io/kicbase/echo-server:functional-331811 not found" id=1a36fa98-d555-454b-9bda-2ec4b9fba78d name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:56:36 functional-331811 crio[9992]: time="2025-12-09T04:56:36.165660507Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-331811 found" id=1a36fa98-d555-454b-9bda-2ec4b9fba78d name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:56:36 functional-331811 crio[9992]: time="2025-12-09T04:56:36.189948852Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-331811" id=52bef1b9-ad81-4e3e-940f-b08b69172945 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:58:23.992973   25441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:58:23.993419   25441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:58:23.994741   25441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:58:23.995440   25441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:58:23.997239   25441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 9 02:15] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 03:35] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 04:15] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 04:17] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:23] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:24] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:41] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 04:58:24 up  9:40,  0 user,  load average: 0.47, 0.37, 0.45
	Linux functional-331811 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 09 04:58:21 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:58:22 functional-331811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1292.
	Dec 09 04:58:22 functional-331811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:58:22 functional-331811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:58:22 functional-331811 kubelet[25315]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:58:22 functional-331811 kubelet[25315]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:58:22 functional-331811 kubelet[25315]: E1209 04:58:22.372659   25315 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:58:22 functional-331811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:58:22 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:58:23 functional-331811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1293.
	Dec 09 04:58:23 functional-331811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:58:23 functional-331811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:58:23 functional-331811 kubelet[25336]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:58:23 functional-331811 kubelet[25336]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:58:23 functional-331811 kubelet[25336]: E1209 04:58:23.142705   25336 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:58:23 functional-331811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:58:23 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:58:23 functional-331811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1294.
	Dec 09 04:58:23 functional-331811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:58:23 functional-331811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:58:23 functional-331811 kubelet[25417]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:58:23 functional-331811 kubelet[25417]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:58:23 functional-331811 kubelet[25417]: E1209 04:58:23.884524   25417 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:58:23 functional-331811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:58:23 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
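The kubelet section above is the root cause for this block of failures: every systemd restart (counters 1292-1294) dies in config validation with "kubelet is configured to not run on a host using cgroup v1", so the apiserver behind port 8441 never comes up and the describe-nodes call is refused. A hedged sketch for reproducing that diagnosis by hand (the stat idiom is standard coreutils, /livez is the stock apiserver health endpoint, and host port 34258 comes from the inspect output above; availability of stat inside the kicbase image is an assumption):

	# Prints "cgroup2fs" on a cgroup v2 host, "tmpfs" on the legacy v1
	# hierarchy that this kubelet build refuses to run on.
	docker exec functional-331811 stat -fc %T /sys/fs/cgroup
	# Confirm nothing answers on the forwarded apiserver port:
	curl -k https://127.0.0.1:34258/livez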
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-331811 -n functional-331811
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-331811 -n functional-331811: exit status 2 (322.633441ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-331811" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.73s)
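Note the two status probes above disagree by design: {{.Host}} reports the Docker container (Running) while {{.APIServer}} probes the control plane (Stopped), which matches the kubelet crash loop in the logs. Both can be read in one call; a sketch assuming minikube's documented status template fields:

	out/minikube-linux-arm64 status -p functional-331811 --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'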

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels


=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-331811 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-331811 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (66.129981ms)

-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-331811 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
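All five label assertions fail identically, and the template error is only a symptom: with the apiserver refusing connections, kubectl gets back an empty List ({"items":[]}), and (index .items 0) on an empty slice is a runtime template error. A hedged variant of the same query that degrades to empty output instead of erroring when no nodes come back:

	kubectl --context functional-331811 get nodes \
	  -o go-template='{{range .items}}{{range $k, $v := .metadata.labels}}{{$k}} {{end}}{{end}}'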
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-331811
helpers_test.go:243: (dbg) docker inspect functional-331811:

-- stdout --
	[
	    {
	        "Id": "51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87",
	        "Created": "2025-12-09T04:27:19.770188806Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1609115,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T04:27:19.828715728Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/hostname",
	        "HostsPath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/hosts",
	        "LogPath": "/var/lib/docker/containers/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87/51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87-json.log",
	        "Name": "/functional-331811",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-331811:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-331811",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "51da5dad63e98354657af83f0eeb217e0f4004466299db22cdebb5e50ec9af87",
	                "LowerDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685-init/diff:/var/lib/docker/overlay2/cb3f2b8eaaa8875b2899fccd39c4eec1759909855a0b804bc10246bdeabb16ed/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2023d153f9a6568686e3dee3f0c1b8430e5547828e1ecdb5ae24bbc79aaf6685/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-331811",
	                "Source": "/var/lib/docker/volumes/functional-331811/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-331811",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-331811",
	                "name.minikube.sigs.k8s.io": "functional-331811",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5c0753338127320f08906f0ae98414e1971b55970cf028db179c2214fd2722cb",
	            "SandboxKey": "/var/run/docker/netns/5c0753338127",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34255"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34256"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34259"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34257"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34258"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-331811": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:27:66:bb:a1:d6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8c16962547dedb5d6155d1546bcc27e347ab5261f9ad46fc3b09cc8fb9cc112f",
	                    "EndpointID": "1a5d6a22e9497009b4121ea56dc4839e2ff8827d92252c0464236c5f49c11216",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-331811",
	                        "51da5dad63e9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
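Note on the inspect output above: the NetworkSettings.Ports map binds every container port to a loopback port on the host (22 -> 34255, 8441 -> 34258). A minimal diagnostic sketch, assuming a shell on the same Jenkins host, for probing the apiserver binding directly; the port number is an example taken from the map above:

	# expect a TLS answer from a healthy apiserver, or "connection refused" while it is down
	curl -k --max-time 5 https://127.0.0.1:34258/healthz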
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-331811 -n functional-331811
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-331811 -n functional-331811: exit status 2 (324.170987ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount     │ -p functional-331811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3534500844/001:/mount1 --alsologtostderr -v=1                      │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │                     │
	│ mount     │ -p functional-331811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3534500844/001:/mount2 --alsologtostderr -v=1                      │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │                     │
	│ ssh       │ functional-331811 ssh findmnt -T /mount1                                                                                                                  │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ ssh       │ functional-331811 ssh findmnt -T /mount2                                                                                                                  │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ ssh       │ functional-331811 ssh findmnt -T /mount3                                                                                                                  │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ mount     │ -p functional-331811 --kill=true                                                                                                                          │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │                     │
	│ start     │ -p functional-331811 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0             │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │                     │
	│ start     │ -p functional-331811 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0             │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │                     │
	│ start     │ -p functional-331811 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                       │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-331811 --alsologtostderr -v=1                                                                                            │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │                     │
	│ license   │                                                                                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ ssh       │ functional-331811 ssh sudo systemctl is-active docker                                                                                                     │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │                     │
	│ ssh       │ functional-331811 ssh sudo systemctl is-active containerd                                                                                                 │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │                     │
	│ image     │ functional-331811 image load --daemon kicbase/echo-server:functional-331811 --alsologtostderr                                                             │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ image     │ functional-331811 image ls                                                                                                                                │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ image     │ functional-331811 image load --daemon kicbase/echo-server:functional-331811 --alsologtostderr                                                             │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ image     │ functional-331811 image ls                                                                                                                                │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ image     │ functional-331811 image load --daemon kicbase/echo-server:functional-331811 --alsologtostderr                                                             │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ image     │ functional-331811 image ls                                                                                                                                │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ image     │ functional-331811 image save kicbase/echo-server:functional-331811 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ image     │ functional-331811 image rm kicbase/echo-server:functional-331811 --alsologtostderr                                                                        │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ image     │ functional-331811 image ls                                                                                                                                │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ image     │ functional-331811 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ image     │ functional-331811 image ls                                                                                                                                │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	│ image     │ functional-331811 image save --daemon kicbase/echo-server:functional-331811 --alsologtostderr                                                             │ functional-331811 │ jenkins │ v1.37.0 │ 09 Dec 25 04:56 UTC │ 09 Dec 25 04:56 UTC │
	└───────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 04:56:29
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 04:56:29.281449 1637842 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:56:29.281669 1637842 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:56:29.281696 1637842 out.go:374] Setting ErrFile to fd 2...
	I1209 04:56:29.281715 1637842 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:56:29.282029 1637842 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 04:56:29.282505 1637842 out.go:368] Setting JSON to false
	I1209 04:56:29.283419 1637842 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":34730,"bootTime":1765221460,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1209 04:56:29.283520 1637842 start.go:143] virtualization:  
	I1209 04:56:29.286665 1637842 out.go:179] * [functional-331811] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 04:56:29.289651 1637842 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 04:56:29.289721 1637842 notify.go:221] Checking for updates...
	I1209 04:56:29.293455 1637842 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:56:29.296286 1637842 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:56:29.299012 1637842 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1577059/.minikube
	I1209 04:56:29.301803 1637842 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 04:56:29.305170 1637842 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 04:56:29.308437 1637842 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 04:56:29.309049 1637842 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:56:29.342793 1637842 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:56:29.342909 1637842 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:56:29.397279 1637842 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 04:56:29.387560385 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:56:29.397384 1637842 docker.go:319] overlay module found
	I1209 04:56:29.400602 1637842 out.go:179] * Using the docker driver based on existing profile
	I1209 04:56:29.403610 1637842 start.go:309] selected driver: docker
	I1209 04:56:29.403639 1637842 start.go:927] validating driver "docker" against &{Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:56:29.403735 1637842 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 04:56:29.403853 1637842 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:56:29.458136 1637842 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 04:56:29.448851846 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:56:29.458670 1637842 cni.go:84] Creating CNI manager for ""
	I1209 04:56:29.458760 1637842 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 04:56:29.458801 1637842 start.go:353] cluster config:
	{Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:56:29.463801 1637842 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.930917732Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=60059689-b22e-4d2c-a555-518b088e6c52 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.93157629Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=cbef184f-5cab-42ab-88e7-b508de5c76c0 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.932075323Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=edcddd48-11b2-4a3e-b703-e9cffa332272 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.932520767Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=b8ee1139-0fe9-45a4-8cea-2e86a978a2fc name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.932923437Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=466ae3ad-f5a9-4d87-be0b-42f8886ae7b1 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.933429871Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=52758864-5ad7-4972-9017-2c4a591649f4 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:50:11 functional-331811 crio[9992]: time="2025-12-09T04:50:11.933861662Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=61e91b9e-e75b-4cf2-b677-070bdf524fb9 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:56:33 functional-331811 crio[9992]: time="2025-12-09T04:56:33.016673909Z" level=info msg="Checking image status: kicbase/echo-server:functional-331811" id=9cbd7c8a-77b2-4776-8d0a-1d5dda59c6e2 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:56:33 functional-331811 crio[9992]: time="2025-12-09T04:56:33.016864385Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 09 04:56:33 functional-331811 crio[9992]: time="2025-12-09T04:56:33.016906035Z" level=info msg="Image kicbase/echo-server:functional-331811 not found" id=9cbd7c8a-77b2-4776-8d0a-1d5dda59c6e2 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:56:33 functional-331811 crio[9992]: time="2025-12-09T04:56:33.016966909Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-331811 found" id=9cbd7c8a-77b2-4776-8d0a-1d5dda59c6e2 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:56:33 functional-331811 crio[9992]: time="2025-12-09T04:56:33.04345603Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-331811" id=6bef986e-d18b-47c0-b9e4-fc7e3b931b02 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:56:33 functional-331811 crio[9992]: time="2025-12-09T04:56:33.043617755Z" level=info msg="Image docker.io/kicbase/echo-server:functional-331811 not found" id=6bef986e-d18b-47c0-b9e4-fc7e3b931b02 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:56:33 functional-331811 crio[9992]: time="2025-12-09T04:56:33.043657919Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-331811 found" id=6bef986e-d18b-47c0-b9e4-fc7e3b931b02 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:56:33 functional-331811 crio[9992]: time="2025-12-09T04:56:33.071514167Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-331811" id=e712b986-974a-4fc0-987d-0e3912ad1749 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:56:33 functional-331811 crio[9992]: time="2025-12-09T04:56:33.071658604Z" level=info msg="Image localhost/kicbase/echo-server:functional-331811 not found" id=e712b986-974a-4fc0-987d-0e3912ad1749 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:56:33 functional-331811 crio[9992]: time="2025-12-09T04:56:33.071701057Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-331811 found" id=e712b986-974a-4fc0-987d-0e3912ad1749 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:56:36 functional-331811 crio[9992]: time="2025-12-09T04:56:36.139111094Z" level=info msg="Checking image status: kicbase/echo-server:functional-331811" id=2b8da84b-490c-4709-954a-74ca24e41d3b name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:56:36 functional-331811 crio[9992]: time="2025-12-09T04:56:36.139296203Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 09 04:56:36 functional-331811 crio[9992]: time="2025-12-09T04:56:36.139336745Z" level=info msg="Image kicbase/echo-server:functional-331811 not found" id=2b8da84b-490c-4709-954a-74ca24e41d3b name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:56:36 functional-331811 crio[9992]: time="2025-12-09T04:56:36.139417033Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-331811 found" id=2b8da84b-490c-4709-954a-74ca24e41d3b name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:56:36 functional-331811 crio[9992]: time="2025-12-09T04:56:36.165476005Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-331811" id=1a36fa98-d555-454b-9bda-2ec4b9fba78d name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:56:36 functional-331811 crio[9992]: time="2025-12-09T04:56:36.165617881Z" level=info msg="Image docker.io/kicbase/echo-server:functional-331811 not found" id=1a36fa98-d555-454b-9bda-2ec4b9fba78d name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:56:36 functional-331811 crio[9992]: time="2025-12-09T04:56:36.165660507Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-331811 found" id=1a36fa98-d555-454b-9bda-2ec4b9fba78d name=/runtime.v1.ImageService/ImageStatus
	Dec 09 04:56:36 functional-331811 crio[9992]: time="2025-12-09T04:56:36.189948852Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-331811" id=52bef1b9-ad81-4e3e-940f-b08b69172945 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1209 04:56:38.691159   24065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:56:38.691578   24065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:56:38.692859   24065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:56:38.693552   24065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1209 04:56:38.695014   24065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 9 02:15] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 03:35] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 04:15] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 04:17] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:23] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:24] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:41] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 04:56:38 up  9:38,  0 user,  load average: 0.99, 0.40, 0.47
	Linux functional-331811 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 09 04:56:35 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:56:36 functional-331811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1151.
	Dec 09 04:56:36 functional-331811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:56:36 functional-331811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:56:36 functional-331811 kubelet[23876]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:56:36 functional-331811 kubelet[23876]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:56:36 functional-331811 kubelet[23876]: E1209 04:56:36.629454   23876 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:56:36 functional-331811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:56:36 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:56:37 functional-331811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1152.
	Dec 09 04:56:37 functional-331811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:56:37 functional-331811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:56:37 functional-331811 kubelet[23926]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:56:37 functional-331811 kubelet[23926]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:56:37 functional-331811 kubelet[23926]: E1209 04:56:37.384077   23926 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:56:37 functional-331811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:56:37 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 04:56:38 functional-331811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1153.
	Dec 09 04:56:38 functional-331811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:56:38 functional-331811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 04:56:38 functional-331811 kubelet[23979]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:56:38 functional-331811 kubelet[23979]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 04:56:38 functional-331811 kubelet[23979]: E1209 04:56:38.146452   23979 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 04:56:38 functional-331811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 04:56:38 functional-331811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-331811 -n functional-331811
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-331811 -n functional-331811: exit status 2 (338.294354ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-331811" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.46s)
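Note: the kubelet units in the logs above keep exiting with "kubelet is configured to not run on a host using cgroup v1", which is why the apiserver never comes back for this profile. A minimal sketch, assuming the container is still up, for confirming which cgroup hierarchy the node exposes:

	# cgroup2fs means cgroup v2; tmpfs means the legacy v1 hierarchy that kubelet rejects here
	out/minikube-linux-arm64 -p functional-331811 ssh "stat -fc %T /sys/fs/cgroup/"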

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-331811 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-331811 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1209 04:54:21.782177 1633570 out.go:360] Setting OutFile to fd 1 ...
I1209 04:54:21.783509 1633570 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:54:21.783534 1633570 out.go:374] Setting ErrFile to fd 2...
I1209 04:54:21.783541 1633570 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:54:21.783905 1633570 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
I1209 04:54:21.784284 1633570 mustload.go:66] Loading cluster: functional-331811
I1209 04:54:21.784916 1633570 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1209 04:54:21.785594 1633570 cli_runner.go:164] Run: docker container inspect functional-331811 --format={{.State.Status}}
I1209 04:54:21.816905 1633570 host.go:66] Checking if "functional-331811" exists ...
I1209 04:54:21.817312 1633570 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1209 04:54:21.933596 1633570 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 04:54:21.923312256 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1209 04:54:21.933718 1633570 api_server.go:166] Checking apiserver status ...
I1209 04:54:21.933773 1633570 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1209 04:54:21.933809 1633570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
I1209 04:54:21.991951 1633570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
W1209 04:54:22.137212 1633570 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1209 04:54:22.140699 1633570 out.go:179] * The control-plane node functional-331811 apiserver is not running: (state=Stopped)
I1209 04:54:22.143661 1633570 out.go:179]   To start a cluster, run: "minikube start -p functional-331811"

                                                
                                                
stdout: * The control-plane node functional-331811 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-331811"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-331811 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 1633571: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-331811 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-331811 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-331811 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-331811 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-331811 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)
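Note: the tunnel's apiserver precheck can be replayed by hand from the two commands logged in the stderr above: resolve the host port mapped to the container's sshd, then look for a kube-apiserver process over ssh. A sketch, assuming the same profile name:

	# both steps mirror the cli_runner/ssh_runner lines above
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-331811
	out/minikube-linux-arm64 -p functional-331811 ssh "sudo pgrep -xnf kube-apiserver.*minikube.*"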

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-331811 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-331811 apply -f testdata/testsvc.yaml: exit status 1 (113.012508ms)

                                                
                                                
** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-331811 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.11s)
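Note: kubectl's client-side validation downloads the OpenAPI schema from the apiserver, so the apply fails before anything is submitted. A sketch, assuming curl on the agent, that reproduces the refusal against the same endpoint the validator tried:

	# expect "connection refused" while the apiserver is stopped
	curl -k --max-time 5 https://192.168.49.2:8441/openapi/v2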

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (109.64s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.103.179.75": Temporary Error: Get "http://10.103.179.75": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-331811 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-331811 get svc nginx-svc: exit status 1 (69.975367ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-331811 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (109.64s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-331811 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-331811 create deployment hello-node --image kicbase/echo-server: exit status 1 (53.648263ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-331811 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.05s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-331811 service list: exit status 103 (265.725426ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-331811 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-331811"

                                                
                                                
-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-arm64 -p functional-331811 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-331811 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-331811\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.27s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-331811 service list -o json: exit status 103 (287.851371ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-331811 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-331811"

                                                
                                                
-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-arm64 -p functional-331811 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.29s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-331811 service --namespace=default --https --url hello-node: exit status 103 (262.158787ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-331811 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-331811"

                                                
                                                
-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-331811 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.26s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-331811 service hello-node --url --format={{.IP}}: exit status 103 (289.18992ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-331811 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-331811"

                                                
                                                
-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-331811 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-331811 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-331811\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.29s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-331811 service hello-node --url: exit status 103 (273.896179ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-331811 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-331811"

                                                
                                                
-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-331811 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-331811 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-331811"
functional_test.go:1579: failed to parse "* The control-plane node functional-331811 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-331811\"": parse "* The control-plane node functional-331811 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-331811\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.27s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.38s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-331811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1726689791/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765256179538005159" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1726689791/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765256179538005159" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1726689791/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765256179538005159" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1726689791/001/test-1765256179538005159
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-331811 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (340.091867ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1209 04:56:19.878406 1580521 retry.go:31] will retry after 466.395572ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  9 04:56 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  9 04:56 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  9 04:56 test-1765256179538005159
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 ssh cat /mount-9p/test-1765256179538005159
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-331811 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-331811 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (64.517133ms)

** stderr ** 
	error: error when deleting "testdata/busybox-mount-test.yaml": Delete "https://192.168.49.2:8441/api/v1/namespaces/default/pods/busybox-mount": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-331811 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
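
The dial error above means nothing was listening on the apiserver endpoint, so every kubectl call in this test fails the same way. A minimal probe of that condition (illustrative only; the address comes straight from the error message):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// A refused TCP dial distinguishes "apiserver down" from "apiserver slow":
	// connection refused comes back immediately when no process holds the port.
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver is accepting connections")
}
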
functional_test_mount_test.go:80: "TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-331811 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (269.657911ms)

-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=39389)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec  9 04:56 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec  9 04:56 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec  9 04:56 test-1765256179538005159
	cat: /mount-9p/pod-dates: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-arm64 -p functional-331811 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-331811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1726689791/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-331811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1726689791/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1726689791/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:39389
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1726689791/001 to /mount-9p

* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-331811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1726689791/001:/mount-9p --alsologtostderr -v=1] stderr:
I1209 04:56:19.592588 1635911 out.go:360] Setting OutFile to fd 1 ...
I1209 04:56:19.592738 1635911 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:56:19.592744 1635911 out.go:374] Setting ErrFile to fd 2...
I1209 04:56:19.592750 1635911 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:56:19.593027 1635911 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
I1209 04:56:19.593292 1635911 mustload.go:66] Loading cluster: functional-331811
I1209 04:56:19.593669 1635911 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1209 04:56:19.594178 1635911 cli_runner.go:164] Run: docker container inspect functional-331811 --format={{.State.Status}}
I1209 04:56:19.623232 1635911 host.go:66] Checking if "functional-331811" exists ...
I1209 04:56:19.623538 1635911 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1209 04:56:19.725597 1635911 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 04:56:19.714159996 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1209 04:56:19.725771 1635911 cli_runner.go:164] Run: docker network inspect functional-331811 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1209 04:56:19.748812 1635911 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1726689791/001 into VM as /mount-9p ...
I1209 04:56:19.751868 1635911 out.go:179]   - Mount type:   9p
I1209 04:56:19.754840 1635911 out.go:179]   - User ID:      docker
I1209 04:56:19.757713 1635911 out.go:179]   - Group ID:     docker
I1209 04:56:19.760549 1635911 out.go:179]   - Version:      9p2000.L
I1209 04:56:19.763460 1635911 out.go:179]   - Message Size: 262144
I1209 04:56:19.766566 1635911 out.go:179]   - Options:      map[]
I1209 04:56:19.769857 1635911 out.go:179]   - Bind Address: 192.168.49.1:39389
I1209 04:56:19.772692 1635911 out.go:179] * Userspace file server: 
I1209 04:56:19.772987 1635911 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1209 04:56:19.773078 1635911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
I1209 04:56:19.795272 1635911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
I1209 04:56:19.901168 1635911 mount.go:180] unmount for /mount-9p ran successfully
I1209 04:56:19.901194 1635911 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1209 04:56:19.909525 1635911 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=39389,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1209 04:56:19.919838 1635911 main.go:127] stdlog: ufs.go:141 connected
I1209 04:56:19.919985 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tversion tag 65535 msize 262144 version '9P2000.L'
I1209 04:56:19.920030 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rversion tag 65535 msize 262144 version '9P2000'
I1209 04:56:19.920255 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1209 04:56:19.920319 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rattach tag 0 aqid (ed751e 177af4e 'd')
I1209 04:56:19.920599 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tstat tag 0 fid 0
I1209 04:56:19.920649 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed751e 177af4e 'd') m d775 at 0 mt 1765256179 l 4096 t 0 d 0 ext )
I1209 04:56:19.924435 1635911 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/.mount-process: {Name:mk10df6d449a423021c2f2fe0ce9a9f7f57f6e00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 04:56:19.924647 1635911 mount.go:105] mount successful: ""
I1209 04:56:19.928311 1635911 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1726689791/001 to /mount-9p
I1209 04:56:19.931861 1635911 out.go:203] 
I1209 04:56:19.935124 1635911 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1209 04:56:20.863536 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tstat tag 0 fid 0
I1209 04:56:20.863621 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed751e 177af4e 'd') m d775 at 0 mt 1765256179 l 4096 t 0 d 0 ext )
I1209 04:56:20.864004 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Twalk tag 0 fid 0 newfid 1 
I1209 04:56:20.864081 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rwalk tag 0 
I1209 04:56:20.864199 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Topen tag 0 fid 1 mode 0
I1209 04:56:20.864251 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Ropen tag 0 qid (ed751e 177af4e 'd') iounit 0
I1209 04:56:20.864342 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tstat tag 0 fid 0
I1209 04:56:20.864380 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed751e 177af4e 'd') m d775 at 0 mt 1765256179 l 4096 t 0 d 0 ext )
I1209 04:56:20.864514 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tread tag 0 fid 1 offset 0 count 262120
I1209 04:56:20.864630 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rread tag 0 count 258
I1209 04:56:20.864750 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tread tag 0 fid 1 offset 258 count 261862
I1209 04:56:20.864784 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rread tag 0 count 0
I1209 04:56:20.864880 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tread tag 0 fid 1 offset 258 count 262120
I1209 04:56:20.864916 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rread tag 0 count 0
I1209 04:56:20.865024 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1209 04:56:20.865060 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rwalk tag 0 (ed751f 177af4e '') 
I1209 04:56:20.865154 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tstat tag 0 fid 2
I1209 04:56:20.865193 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed751f 177af4e '') m 644 at 0 mt 1765256179 l 24 t 0 d 0 ext )
I1209 04:56:20.865290 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tstat tag 0 fid 2
I1209 04:56:20.865319 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed751f 177af4e '') m 644 at 0 mt 1765256179 l 24 t 0 d 0 ext )
I1209 04:56:20.865418 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tclunk tag 0 fid 2
I1209 04:56:20.865456 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rclunk tag 0
I1209 04:56:20.865560 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Twalk tag 0 fid 0 newfid 2 0:'test-1765256179538005159' 
I1209 04:56:20.865604 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rwalk tag 0 (ed7521 177af4e '') 
I1209 04:56:20.865700 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tstat tag 0 fid 2
I1209 04:56:20.865736 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rstat tag 0 st ('test-1765256179538005159' 'jenkins' 'jenkins' '' q (ed7521 177af4e '') m 644 at 0 mt 1765256179 l 24 t 0 d 0 ext )
I1209 04:56:20.865832 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tstat tag 0 fid 2
I1209 04:56:20.865877 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rstat tag 0 st ('test-1765256179538005159' 'jenkins' 'jenkins' '' q (ed7521 177af4e '') m 644 at 0 mt 1765256179 l 24 t 0 d 0 ext )
I1209 04:56:20.865977 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tclunk tag 0 fid 2
I1209 04:56:20.866003 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rclunk tag 0
I1209 04:56:20.866101 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1209 04:56:20.866142 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rwalk tag 0 (ed7520 177af4e '') 
I1209 04:56:20.866243 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tstat tag 0 fid 2
I1209 04:56:20.866282 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed7520 177af4e '') m 644 at 0 mt 1765256179 l 24 t 0 d 0 ext )
I1209 04:56:20.866379 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tstat tag 0 fid 2
I1209 04:56:20.866413 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed7520 177af4e '') m 644 at 0 mt 1765256179 l 24 t 0 d 0 ext )
I1209 04:56:20.866530 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tclunk tag 0 fid 2
I1209 04:56:20.866555 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rclunk tag 0
I1209 04:56:20.866667 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tread tag 0 fid 1 offset 258 count 262120
I1209 04:56:20.866700 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rread tag 0 count 0
I1209 04:56:20.866816 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tclunk tag 0 fid 1
I1209 04:56:20.866849 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rclunk tag 0
I1209 04:56:21.147214 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Twalk tag 0 fid 0 newfid 1 0:'test-1765256179538005159' 
I1209 04:56:21.147292 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rwalk tag 0 (ed7521 177af4e '') 
I1209 04:56:21.147497 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tstat tag 0 fid 1
I1209 04:56:21.147557 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rstat tag 0 st ('test-1765256179538005159' 'jenkins' 'jenkins' '' q (ed7521 177af4e '') m 644 at 0 mt 1765256179 l 24 t 0 d 0 ext )
I1209 04:56:21.147708 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Twalk tag 0 fid 1 newfid 2 
I1209 04:56:21.147744 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rwalk tag 0 
I1209 04:56:21.147880 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Topen tag 0 fid 2 mode 0
I1209 04:56:21.147928 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Ropen tag 0 qid (ed7521 177af4e '') iounit 0
I1209 04:56:21.148089 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tstat tag 0 fid 1
I1209 04:56:21.148124 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rstat tag 0 st ('test-1765256179538005159' 'jenkins' 'jenkins' '' q (ed7521 177af4e '') m 644 at 0 mt 1765256179 l 24 t 0 d 0 ext )
I1209 04:56:21.148285 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tread tag 0 fid 2 offset 0 count 262120
I1209 04:56:21.148328 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rread tag 0 count 24
I1209 04:56:21.148447 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tread tag 0 fid 2 offset 24 count 262120
I1209 04:56:21.148491 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rread tag 0 count 0
I1209 04:56:21.148636 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tread tag 0 fid 2 offset 24 count 262120
I1209 04:56:21.148687 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rread tag 0 count 0
I1209 04:56:21.148924 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tclunk tag 0 fid 2
I1209 04:56:21.148955 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rclunk tag 0
I1209 04:56:21.149148 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tclunk tag 0 fid 1
I1209 04:56:21.149177 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rclunk tag 0
I1209 04:56:21.485413 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tstat tag 0 fid 0
I1209 04:56:21.485504 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed751e 177af4e 'd') m d775 at 0 mt 1765256179 l 4096 t 0 d 0 ext )
I1209 04:56:21.485858 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Twalk tag 0 fid 0 newfid 1 
I1209 04:56:21.485894 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rwalk tag 0 
I1209 04:56:21.486034 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Topen tag 0 fid 1 mode 0
I1209 04:56:21.486084 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Ropen tag 0 qid (ed751e 177af4e 'd') iounit 0
I1209 04:56:21.486232 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tstat tag 0 fid 0
I1209 04:56:21.486283 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed751e 177af4e 'd') m d775 at 0 mt 1765256179 l 4096 t 0 d 0 ext )
I1209 04:56:21.486458 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tread tag 0 fid 1 offset 0 count 262120
I1209 04:56:21.486566 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rread tag 0 count 258
I1209 04:56:21.486724 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tread tag 0 fid 1 offset 258 count 261862
I1209 04:56:21.486757 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rread tag 0 count 0
I1209 04:56:21.486882 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tread tag 0 fid 1 offset 258 count 262120
I1209 04:56:21.486910 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rread tag 0 count 0
I1209 04:56:21.487065 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1209 04:56:21.487121 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rwalk tag 0 (ed751f 177af4e '') 
I1209 04:56:21.487258 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tstat tag 0 fid 2
I1209 04:56:21.487295 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed751f 177af4e '') m 644 at 0 mt 1765256179 l 24 t 0 d 0 ext )
I1209 04:56:21.487431 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tstat tag 0 fid 2
I1209 04:56:21.487471 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed751f 177af4e '') m 644 at 0 mt 1765256179 l 24 t 0 d 0 ext )
I1209 04:56:21.487600 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tclunk tag 0 fid 2
I1209 04:56:21.487620 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rclunk tag 0
I1209 04:56:21.487761 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Twalk tag 0 fid 0 newfid 2 0:'test-1765256179538005159' 
I1209 04:56:21.487808 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rwalk tag 0 (ed7521 177af4e '') 
I1209 04:56:21.487925 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tstat tag 0 fid 2
I1209 04:56:21.487973 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rstat tag 0 st ('test-1765256179538005159' 'jenkins' 'jenkins' '' q (ed7521 177af4e '') m 644 at 0 mt 1765256179 l 24 t 0 d 0 ext )
I1209 04:56:21.488109 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tstat tag 0 fid 2
I1209 04:56:21.488138 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rstat tag 0 st ('test-1765256179538005159' 'jenkins' 'jenkins' '' q (ed7521 177af4e '') m 644 at 0 mt 1765256179 l 24 t 0 d 0 ext )
I1209 04:56:21.488274 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tclunk tag 0 fid 2
I1209 04:56:21.488298 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rclunk tag 0
I1209 04:56:21.488445 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1209 04:56:21.488476 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rwalk tag 0 (ed7520 177af4e '') 
I1209 04:56:21.488602 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tstat tag 0 fid 2
I1209 04:56:21.488636 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed7520 177af4e '') m 644 at 0 mt 1765256179 l 24 t 0 d 0 ext )
I1209 04:56:21.488767 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tstat tag 0 fid 2
I1209 04:56:21.488800 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed7520 177af4e '') m 644 at 0 mt 1765256179 l 24 t 0 d 0 ext )
I1209 04:56:21.488923 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tclunk tag 0 fid 2
I1209 04:56:21.488941 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rclunk tag 0
I1209 04:56:21.489072 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tread tag 0 fid 1 offset 258 count 262120
I1209 04:56:21.489095 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rread tag 0 count 0
I1209 04:56:21.489232 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tclunk tag 0 fid 1
I1209 04:56:21.489258 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rclunk tag 0
I1209 04:56:21.490495 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1209 04:56:21.490601 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rerror tag 0 ename 'file not found' ecode 0
I1209 04:56:21.791704 1635911 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:55590 Tclunk tag 0 fid 0
I1209 04:56:21.791762 1635911 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:55590 Rclunk tag 0
I1209 04:56:21.792938 1635911 main.go:127] stdlog: ufs.go:147 disconnected
I1209 04:56:21.816559 1635911 out.go:179] * Unmounting /mount-9p ...
I1209 04:56:21.819756 1635911 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1209 04:56:21.826930 1635911 mount.go:180] unmount for /mount-9p ran successfully
I1209 04:56:21.827048 1635911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/.mount-process: {Name:mk10df6d449a423021c2f2fe0ce9a9f7f57f6e00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 04:56:21.830217 1635911 out.go:203] 
W1209 04:56:21.833312 1635911 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1209 04:56:21.836249 1635911 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.38s)
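
The retry.go:31 line earlier in this test ("will retry after 466.395572ms: exit status 1") reflects a retry-with-randomized-growing-delay pattern around flaky guest commands. A minimal sketch of that pattern, using a hypothetical retryAfter helper rather than minikube's actual pkg/util/retry:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryAfter re-runs fn with a randomized, doubling delay until it succeeds
// or the attempt budget is spent, logging each delay like the test output.
func retryAfter(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		base *= 2
	}
	return err
}

func main() {
	_ = retryAfter(3, 300*time.Millisecond, func() error {
		return errors.New("exit status 1") // stand-in for the findmnt check
	})
}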

x
+
TestMultiControlPlane/serial/RestartSecondaryNode (509.53s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 node start m02 --alsologtostderr -v 5
E1209 05:04:22.262552 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:04:24.854338 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-790468/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:04:31.980482 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:04:49.971129 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:06:21.780957 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-790468/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:09:22.262801 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:09:31.980130 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:11:21.780885 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-790468/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-634473 node start m02 --alsologtostderr -v 5: exit status 80 (7m42.733162492s)

-- stdout --
	* Starting "ha-634473-m02" control-plane node in "ha-634473" cluster
	* Pulling base image v0.0.48-1765184860-22066 ...
	* Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1209 05:03:51.435168 1656544 out.go:360] Setting OutFile to fd 1 ...
	I1209 05:03:51.436036 1656544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:03:51.436051 1656544 out.go:374] Setting ErrFile to fd 2...
	I1209 05:03:51.436056 1656544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:03:51.436339 1656544 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 05:03:51.436653 1656544 mustload.go:66] Loading cluster: ha-634473
	I1209 05:03:51.437074 1656544 config.go:182] Loaded profile config "ha-634473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 05:03:51.437675 1656544 cli_runner.go:164] Run: docker container inspect ha-634473-m02 --format={{.State.Status}}
	W1209 05:03:51.462230 1656544 host.go:58] "ha-634473-m02" host status: Stopped
	I1209 05:03:51.465862 1656544 out.go:179] * Starting "ha-634473-m02" control-plane node in "ha-634473" cluster
	I1209 05:03:51.468756 1656544 cache.go:134] Beginning downloading kic base image for docker with crio
	I1209 05:03:51.471578 1656544 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 05:03:51.474424 1656544 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 05:03:51.474464 1656544 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 05:03:51.474479 1656544 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1209 05:03:51.474497 1656544 cache.go:65] Caching tarball of preloaded images
	I1209 05:03:51.474635 1656544 preload.go:238] Found /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1209 05:03:51.474647 1656544 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1209 05:03:51.474794 1656544 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/config.json ...
	I1209 05:03:51.498336 1656544 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 05:03:51.498374 1656544 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 05:03:51.498393 1656544 cache.go:243] Successfully downloaded all kic artifacts
	I1209 05:03:51.498417 1656544 start.go:360] acquireMachinesLock for ha-634473-m02: {Name:mk12a21800248c722fe299fa0c218c0fccb4ad14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:03:51.498483 1656544 start.go:364] duration metric: took 43.086µs to acquireMachinesLock for "ha-634473-m02"
	I1209 05:03:51.498502 1656544 start.go:96] Skipping create...Using existing machine configuration
	I1209 05:03:51.498508 1656544 fix.go:54] fixHost starting: m02
	I1209 05:03:51.499236 1656544 cli_runner.go:164] Run: docker container inspect ha-634473-m02 --format={{.State.Status}}
	I1209 05:03:51.523631 1656544 fix.go:112] recreateIfNeeded on ha-634473-m02: state=Stopped err=<nil>
	W1209 05:03:51.523673 1656544 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 05:03:51.526915 1656544 out.go:252] * Restarting existing docker container for "ha-634473-m02" ...
	I1209 05:03:51.527013 1656544 cli_runner.go:164] Run: docker start ha-634473-m02
	I1209 05:03:51.845417 1656544 cli_runner.go:164] Run: docker container inspect ha-634473-m02 --format={{.State.Status}}
	I1209 05:03:51.868479 1656544 kic.go:430] container "ha-634473-m02" state is running.
	I1209 05:03:51.868931 1656544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473-m02
	I1209 05:03:51.900994 1656544 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/config.json ...
	I1209 05:03:51.901258 1656544 machine.go:94] provisionDockerMachine start ...
	I1209 05:03:51.901329 1656544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m02
	I1209 05:03:51.926464 1656544 main.go:143] libmachine: Using SSH client type: native
	I1209 05:03:51.926961 1656544 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34280 <nil> <nil>}
	I1209 05:03:51.926989 1656544 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 05:03:51.928537 1656544 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1209 05:03:55.148968 1656544 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-634473-m02
	
	I1209 05:03:55.148995 1656544 ubuntu.go:182] provisioning hostname "ha-634473-m02"
	I1209 05:03:55.149082 1656544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m02
	I1209 05:03:55.184191 1656544 main.go:143] libmachine: Using SSH client type: native
	I1209 05:03:55.184505 1656544 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34280 <nil> <nil>}
	I1209 05:03:55.184519 1656544 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-634473-m02 && echo "ha-634473-m02" | sudo tee /etc/hostname
	I1209 05:03:55.404475 1656544 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-634473-m02
	
	I1209 05:03:55.404559 1656544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m02
	I1209 05:03:55.425898 1656544 main.go:143] libmachine: Using SSH client type: native
	I1209 05:03:55.426311 1656544 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34280 <nil> <nil>}
	I1209 05:03:55.426364 1656544 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-634473-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-634473-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-634473-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 05:03:55.611827 1656544 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 05:03:55.611906 1656544 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1577059/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1577059/.minikube}
	I1209 05:03:55.611950 1656544 ubuntu.go:190] setting up certificates
	I1209 05:03:55.611999 1656544 provision.go:84] configureAuth start
	I1209 05:03:55.612106 1656544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473-m02
	I1209 05:03:55.632748 1656544 provision.go:143] copyHostCerts
	I1209 05:03:55.632793 1656544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem
	I1209 05:03:55.632827 1656544 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem, removing ...
	I1209 05:03:55.632843 1656544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem
	I1209 05:03:55.632919 1656544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem (1078 bytes)
	I1209 05:03:55.633010 1656544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem
	I1209 05:03:55.633026 1656544 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem, removing ...
	I1209 05:03:55.633031 1656544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem
	I1209 05:03:55.633058 1656544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem (1123 bytes)
	I1209 05:03:55.633108 1656544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem
	I1209 05:03:55.633127 1656544 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem, removing ...
	I1209 05:03:55.633132 1656544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem
	I1209 05:03:55.633158 1656544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem (1675 bytes)
	I1209 05:03:55.633211 1656544 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem org=jenkins.ha-634473-m02 san=[127.0.0.1 192.168.49.3 ha-634473-m02 localhost minikube]
	I1209 05:03:55.991923 1656544 provision.go:177] copyRemoteCerts
	I1209 05:03:55.992036 1656544 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 05:03:55.992112 1656544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m02
	I1209 05:03:56.012680 1656544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34280 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m02/id_rsa Username:docker}
	I1209 05:03:56.135240 1656544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 05:03:56.135307 1656544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 05:03:56.161360 1656544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 05:03:56.161445 1656544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1209 05:03:56.190029 1656544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 05:03:56.190144 1656544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 05:03:56.222046 1656544 provision.go:87] duration metric: took 610.006359ms to configureAuth
	I1209 05:03:56.222123 1656544 ubuntu.go:206] setting minikube options for container-runtime
	I1209 05:03:56.222426 1656544 config.go:182] Loaded profile config "ha-634473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 05:03:56.222592 1656544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m02
	I1209 05:03:56.242795 1656544 main.go:143] libmachine: Using SSH client type: native
	I1209 05:03:56.243186 1656544 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34280 <nil> <nil>}
	I1209 05:03:56.243205 1656544 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 05:03:58.374477 1656544 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 05:03:58.374561 1656544 machine.go:97] duration metric: took 6.473284553s to provisionDockerMachine
	I1209 05:03:58.374609 1656544 start.go:293] postStartSetup for "ha-634473-m02" (driver="docker")
	I1209 05:03:58.374646 1656544 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 05:03:58.374772 1656544 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 05:03:58.374853 1656544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m02
	I1209 05:03:58.393230 1656544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34280 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m02/id_rsa Username:docker}
	I1209 05:03:58.507308 1656544 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 05:03:58.511043 1656544 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 05:03:58.511075 1656544 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 05:03:58.511087 1656544 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1577059/.minikube/addons for local assets ...
	I1209 05:03:58.511158 1656544 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1577059/.minikube/files for local assets ...
	I1209 05:03:58.511246 1656544 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem -> 15805212.pem in /etc/ssl/certs
	I1209 05:03:58.511258 1656544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem -> /etc/ssl/certs/15805212.pem
	I1209 05:03:58.511357 1656544 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 05:03:58.523833 1656544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem --> /etc/ssl/certs/15805212.pem (1708 bytes)
	I1209 05:03:58.549917 1656544 start.go:296] duration metric: took 175.26714ms for postStartSetup
	I1209 05:03:58.550038 1656544 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:03:58.550109 1656544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m02
	I1209 05:03:58.569060 1656544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34280 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m02/id_rsa Username:docker}
	I1209 05:03:58.675873 1656544 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 05:03:58.681219 1656544 fix.go:56] duration metric: took 7.18268632s for fixHost
	I1209 05:03:58.681248 1656544 start.go:83] releasing machines lock for "ha-634473-m02", held for 7.182756483s
	I1209 05:03:58.681337 1656544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473-m02
	I1209 05:03:58.711358 1656544 ssh_runner.go:195] Run: systemctl --version
	I1209 05:03:58.711425 1656544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m02
	I1209 05:03:58.711718 1656544 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 05:03:58.711777 1656544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m02
	I1209 05:03:58.740473 1656544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34280 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m02/id_rsa Username:docker}
	I1209 05:03:58.749700 1656544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34280 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m02/id_rsa Username:docker}
	I1209 05:03:58.984738 1656544 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 05:03:59.055001 1656544 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 05:03:59.062173 1656544 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 05:03:59.062274 1656544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 05:03:59.080917 1656544 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 05:03:59.080944 1656544 start.go:496] detecting cgroup driver to use...
	I1209 05:03:59.081007 1656544 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 05:03:59.081080 1656544 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 05:03:59.104150 1656544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 05:03:59.128551 1656544 docker.go:218] disabling cri-docker service (if available) ...
	I1209 05:03:59.128643 1656544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 05:03:59.164249 1656544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 05:03:59.188801 1656544 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 05:03:59.501765 1656544 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 05:03:59.757845 1656544 docker.go:234] disabling docker service ...
	I1209 05:03:59.757966 1656544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 05:03:59.780832 1656544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 05:03:59.812455 1656544 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 05:04:00.289198 1656544 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 05:04:00.711930 1656544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 05:04:00.729939 1656544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 05:04:00.756266 1656544 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 05:04:00.756380 1656544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 05:04:00.785216 1656544 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 05:04:00.785307 1656544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 05:04:00.804161 1656544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 05:04:00.833127 1656544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 05:04:00.847234 1656544 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 05:04:00.863092 1656544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 05:04:00.879560 1656544 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 05:04:00.893610 1656544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 05:04:00.904832 1656544 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 05:04:00.921205 1656544 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 05:04:00.934493 1656544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:04:01.219696 1656544 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 05:05:31.557170 1656544 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.337439836s)
	I1209 05:05:31.557212 1656544 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 05:05:31.557264 1656544 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 05:05:31.561813 1656544 start.go:564] Will wait 60s for crictl version
	I1209 05:05:31.561882 1656544 ssh_runner.go:195] Run: which crictl
	I1209 05:05:31.565284 1656544 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 05:05:31.596092 1656544 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1209 05:05:31.596174 1656544 ssh_runner.go:195] Run: crio --version
	I1209 05:05:31.631795 1656544 ssh_runner.go:195] Run: crio --version
	I1209 05:05:31.675160 1656544 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1209 05:05:31.678113 1656544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:05:31.753932 1656544 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:true NGoroutines:82 SystemTime:2025-12-09 05:05:31.743881306 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:05:31.754108 1656544 cli_runner.go:164] Run: docker network inspect ha-634473 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 05:05:31.780833 1656544 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1209 05:05:31.785192 1656544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 05:05:31.797716 1656544 mustload.go:66] Loading cluster: ha-634473
	I1209 05:05:31.797983 1656544 config.go:182] Loaded profile config "ha-634473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 05:05:31.798258 1656544 cli_runner.go:164] Run: docker container inspect ha-634473 --format={{.State.Status}}
	I1209 05:05:31.819964 1656544 host.go:66] Checking if "ha-634473" exists ...
	I1209 05:05:31.820417 1656544 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473 for IP: 192.168.49.3
	I1209 05:05:31.820430 1656544 certs.go:195] generating shared ca certs ...
	I1209 05:05:31.820445 1656544 certs.go:227] acquiring lock for ca certs: {Name:mkbe8bce08db7aa945866791683d426e1b560718 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:05:31.820673 1656544 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key
	I1209 05:05:31.820724 1656544 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key
	I1209 05:05:31.820732 1656544 certs.go:257] generating profile certs ...
	I1209 05:05:31.820815 1656544 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/client.key
	I1209 05:05:31.820842 1656544 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.key.0021bceb
	I1209 05:05:31.820855 1656544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.crt.0021bceb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1209 05:05:32.303744 1656544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.crt.0021bceb ...
	I1209 05:05:32.303778 1656544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.crt.0021bceb: {Name:mkf62498bdc03b83355b67cc140ab28a6436c1de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:05:32.303966 1656544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.key.0021bceb ...
	I1209 05:05:32.303984 1656544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.key.0021bceb: {Name:mkb2df0cf1be72f129409dd291f8dd6f4a8ddc56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:05:32.304076 1656544 certs.go:382] copying /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.crt.0021bceb -> /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.crt
	I1209 05:05:32.304219 1656544 certs.go:386] copying /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.key.0021bceb -> /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.key
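
crypto.go generates the apiserver serving certificate with every address a client might dial listed as an IP SAN: the in-cluster service IP, loopback, all three control-plane node IPs, and the kube-vip VIP. A self-signed sketch of such a certificate using Go's crypto/x509; minikube instead signs with the shared minikubeCA, and the names here are illustrative:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // The same SAN set the log records for apiserver.crt.
        sans := []string{"10.96.0.1", "127.0.0.1", "10.0.0.1",
            "192.168.49.2", "192.168.49.3", "192.168.49.4", "192.168.49.254"}

        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        for _, s := range sans {
            tmpl.IPAddresses = append(tmpl.IPAddresses, net.ParseIP(s))
        }
        // Self-signed (template == parent) for brevity; minikube signs with its CA.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
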
	I1209 05:05:32.304356 1656544 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/proxy-client.key
	I1209 05:05:32.304375 1656544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 05:05:32.304391 1656544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 05:05:32.304408 1656544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 05:05:32.304419 1656544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 05:05:32.304444 1656544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 05:05:32.304460 1656544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 05:05:32.304476 1656544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 05:05:32.304490 1656544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 05:05:32.304546 1656544 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem (1338 bytes)
	W1209 05:05:32.304586 1656544 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521_empty.pem, impossibly tiny 0 bytes
	I1209 05:05:32.304598 1656544 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 05:05:32.304627 1656544 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem (1078 bytes)
	I1209 05:05:32.304655 1656544 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem (1123 bytes)
	I1209 05:05:32.304686 1656544 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem (1675 bytes)
	I1209 05:05:32.304734 1656544 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem (1708 bytes)
	I1209 05:05:32.304772 1656544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem -> /usr/share/ca-certificates/1580521.pem
	I1209 05:05:32.304789 1656544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem -> /usr/share/ca-certificates/15805212.pem
	I1209 05:05:32.304801 1656544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:05:32.304870 1656544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473
	I1209 05:05:32.324285 1656544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473/id_rsa Username:docker}
	I1209 05:05:32.426935 1656544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1209 05:05:32.432249 1656544 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1209 05:05:32.441161 1656544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1209 05:05:32.444967 1656544 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1209 05:05:32.453235 1656544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1209 05:05:32.456851 1656544 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1209 05:05:32.465808 1656544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1209 05:05:32.470006 1656544 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1209 05:05:32.480785 1656544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1209 05:05:32.484606 1656544 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1209 05:05:32.493895 1656544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1209 05:05:32.497815 1656544 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1209 05:05:32.516330 1656544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 05:05:32.537129 1656544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 05:05:32.557210 1656544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 05:05:32.577761 1656544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 05:05:32.599181 1656544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1209 05:05:32.621075 1656544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 05:05:32.641425 1656544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 05:05:32.661666 1656544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 05:05:32.684550 1656544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem --> /usr/share/ca-certificates/1580521.pem (1338 bytes)
	I1209 05:05:32.705359 1656544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem --> /usr/share/ca-certificates/15805212.pem (1708 bytes)
	I1209 05:05:32.726767 1656544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 05:05:32.746959 1656544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1209 05:05:32.764178 1656544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1209 05:05:32.779337 1656544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1209 05:05:32.793734 1656544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1209 05:05:32.814444 1656544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1209 05:05:32.829256 1656544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1209 05:05:32.843106 1656544 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1209 05:05:32.856844 1656544 ssh_runner.go:195] Run: openssl version
	I1209 05:05:32.863844 1656544 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:05:32.872354 1656544 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 05:05:32.880202 1656544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:05:32.884261 1656544 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:17 /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:05:32.884355 1656544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:05:32.928566 1656544 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 05:05:32.936686 1656544 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1580521.pem
	I1209 05:05:32.944773 1656544 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1580521.pem /etc/ssl/certs/1580521.pem
	I1209 05:05:32.953219 1656544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1580521.pem
	I1209 05:05:32.957805 1656544 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:27 /usr/share/ca-certificates/1580521.pem
	I1209 05:05:32.957881 1656544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1580521.pem
	I1209 05:05:33.000409 1656544 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 05:05:33.010898 1656544 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15805212.pem
	I1209 05:05:33.019893 1656544 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15805212.pem /etc/ssl/certs/15805212.pem
	I1209 05:05:33.029698 1656544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15805212.pem
	I1209 05:05:33.033720 1656544 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:27 /usr/share/ca-certificates/15805212.pem
	I1209 05:05:33.033884 1656544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15805212.pem
	I1209 05:05:33.076130 1656544 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 05:05:33.084255 1656544 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 05:05:33.088219 1656544 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 05:05:33.130734 1656544 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 05:05:33.172502 1656544 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 05:05:33.214148 1656544 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 05:05:33.263175 1656544 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 05:05:33.327359 1656544 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
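
Each openssl x509 -checkend 86400 run above asks whether the certificate expires within the next 86400 seconds (24 hours); exit status 0 means it remains valid past that horizon. A Go equivalent of the same check, as a sketch (the checkend helper name is illustrative):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // checkend reports whether the PEM certificate at path expires within d,
    // the same test `openssl x509 -checkend 86400` performs in the log.
    func checkend(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        expiring, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("expires within 24h:", expiring)
    }
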
	I1209 05:05:33.401807 1656544 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.2 crio true true} ...
	I1209 05:05:33.402004 1656544 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-634473-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-634473 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 05:05:33.402071 1656544 kube-vip.go:115] generating kube-vip config ...
	I1209 05:05:33.402156 1656544 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1209 05:05:33.424236 1656544 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1209 05:05:33.424391 1656544 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
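
The manifest above is only generated after the lsmod probe: when the ip_vs kernel modules are absent, minikube gives up on control-plane load-balancing and writes the kube-vip config without it, as the kube-vip.go:163 line records. A sketch of that gate in Go; hasIPVS is an illustrative name, not minikube's function:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hasIPVS runs lsmod and looks for a loaded ip_vs module, mimicking
    // `sudo sh -c "lsmod | grep ip_vs"` from the log.
    func hasIPVS() bool {
        out, err := exec.Command("lsmod").Output()
        if err != nil {
            return false
        }
        for _, line := range strings.Split(string(out), "\n") {
            if strings.HasPrefix(line, "ip_vs") {
                return true
            }
        }
        return false
    }

    func main() {
        if hasIPVS() {
            fmt.Println("ip_vs loaded: kube-vip config can enable control-plane load-balancing")
        } else {
            fmt.Println("ip_vs missing: generating kube-vip config without load-balancing")
        }
    }
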
	I1209 05:05:33.424484 1656544 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1209 05:05:33.438843 1656544 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 05:05:33.438962 1656544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1209 05:05:33.449469 1656544 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1209 05:05:33.468645 1656544 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 05:05:33.488036 1656544 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1209 05:05:33.516514 1656544 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1209 05:05:33.521687 1656544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 05:05:33.538143 1656544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:05:33.782180 1656544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 05:05:33.800950 1656544 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 05:05:33.801130 1656544 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 05:05:33.801287 1656544 config.go:182] Loaded profile config "ha-634473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 05:05:33.804721 1656544 out.go:179] * Verifying Kubernetes components...
	I1209 05:05:33.804820 1656544 out.go:179] * Enabled addons: 
	I1209 05:05:33.808603 1656544 addons.go:530] duration metric: took 7.468135ms for enable addons: enabled=[]
	I1209 05:05:33.808702 1656544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:05:34.059643 1656544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 05:05:34.084955 1656544 kapi.go:59] client config for ha-634473: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/client.crt", KeyFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/client.key", CAFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1209 05:05:34.085123 1656544 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1209 05:05:34.085721 1656544 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1209 05:05:34.085765 1656544 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1209 05:05:34.085809 1656544 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1209 05:05:34.085859 1656544 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1209 05:05:34.085891 1656544 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1209 05:05:34.085915 1656544 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1209 05:05:34.086248 1656544 node_ready.go:35] waiting up to 6m0s for node "ha-634473-m02" to be "Ready" ...
	W1209 05:05:36.103651 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:05:38.590482 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:05:41.090270 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:05:43.591188 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:05:46.090541 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:05:48.090696 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:05:50.590421 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:05:53.090228 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:05:55.090848 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:05:57.590322 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:05:59.591829 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:06:02.090334 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:06:04.590162 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:06:07.089973 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:06:09.090315 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:06:11.589951 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:06:13.590812 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:06:15.592504 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:06:18.090610 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:06:20.590772 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:06:23.090837 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:06:25.590759 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:06:28.090621 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:06:30.091664 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:06:32.592290 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:06:35.090015 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:06:37.590450 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:06:39.591042 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:06:42.094994 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:06:44.590034 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:06:46.590090 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:06:48.590513 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:06:51.089861 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:06:53.090622 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:06:55.590916 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:06:58.090796 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:07:00.113034 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:07:02.590904 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:07:05.089809 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:07:07.090521 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:07:09.592281 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:07:12.090181 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:07:14.091213 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:07:16.590051 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:07:18.590762 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:07:21.090500 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:07:23.590507 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:07:25.600190 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:07:28.090563 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:07:30.590868 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:07:33.090093 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:07:35.091497 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:07:37.590538 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:07:40.089723 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:07:42.090511 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:07:44.590357 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:07:47.089837 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:07:49.090700 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:07:51.091091 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:07:53.589912 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:07:55.590458 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:07:58.090913 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:08:00.135445 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:08:02.589896 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:08:04.590245 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:08:07.090170 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:08:09.090895 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:08:11.091065 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:08:13.590732 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:08:16.090642 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:08:18.090799 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:08:20.590842 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:08:23.089800 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:08:25.091440 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:08:27.591306 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:08:30.091240 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:08:32.590458 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:08:34.590739 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:08:37.091066 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:08:39.590078 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:08:41.590161 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:08:44.090115 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:08:46.090227 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:08:48.590884 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:08:51.090772 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:08:53.590503 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:08:56.090744 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:08:58.090867 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:09:00.106676 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:09:02.592047 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:09:05.090410 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:09:07.590683 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:09:10.090393 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:09:12.590255 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:09:14.595109 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:09:17.090567 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:09:19.091670 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:09:21.591661 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:09:24.090547 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:09:26.094243 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:09:28.590556 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:09:30.591037 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:09:33.090610 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:09:35.591096 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:09:38.090878 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:09:40.590542 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:09:42.590796 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:09:45.107215 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:09:47.592627 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:09:50.091275 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:09:52.594090 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:09:55.090645 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:09:57.090842 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:09:59.590649 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:10:02.090946 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:10:04.091223 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:10:06.589904 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:10:08.590474 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:10:11.090938 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:10:13.590021 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:10:15.590473 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:10:18.090505 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:10:20.091541 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:10:22.590204 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:10:25.091859 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:10:27.589626 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:10:29.590247 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:10:32.090311 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:10:34.090638 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:10:36.590552 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:10:38.590781 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:10:41.089835 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:10:43.089984 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:10:45.092620 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:10:47.590471 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:10:50.090359 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:10:52.091099 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:10:54.595591 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:10:57.090552 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:10:59.589607 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:11:01.591105 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:11:04.092654 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:11:06.589996 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:11:08.590738 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:11:10.591222 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:11:13.091038 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:11:15.091178 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:11:17.589989 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:11:19.591033 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:11:22.090902 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:11:24.589955 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:11:27.090062 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:11:29.590406 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	W1209 05:11:32.091164 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
	I1209 05:11:34.086991 1656544 node_ready.go:38] duration metric: took 6m0.000689641s for node "ha-634473-m02" to be "Ready" ...
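
The loop above polls the node's Ready condition every ~2.5s until the 6-minute budget from start.go:236 is spent; because kubelet on m02 never reports Ready, the wait ends in a context deadline. A sketch of the same wait using client-go; the function name and kubeconfig path are illustrative, not minikube's node_ready.go:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the node's Ready condition until it is True or the
    // timeout elapses, the same shape as the 6m wait in the log.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat API errors as transient and retry
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitNodeReady(cs, "ha-634473-m02", 6*time.Minute); err != nil {
            fmt.Println("node never became Ready:", err) // e.g. context deadline exceeded
        }
    }
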
	I1209 05:11:34.089993 1656544 out.go:203] 
	W1209 05:11:34.092799 1656544 out.go:285] X Exiting due to GUEST_NODE_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1209 05:11:34.092825 1656544 out.go:285] * 
	W1209 05:11:34.101276 1656544 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 05:11:34.104378 1656544 out.go:203] 

** /stderr **
ha_test.go:424: I1209 05:03:51.435168 1656544 out.go:360] Setting OutFile to fd 1 ...
I1209 05:03:51.436036 1656544 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 05:03:51.436051 1656544 out.go:374] Setting ErrFile to fd 2...
I1209 05:03:51.436056 1656544 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 05:03:51.436339 1656544 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
I1209 05:03:51.436653 1656544 mustload.go:66] Loading cluster: ha-634473
I1209 05:03:51.437074 1656544 config.go:182] Loaded profile config "ha-634473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1209 05:03:51.437675 1656544 cli_runner.go:164] Run: docker container inspect ha-634473-m02 --format={{.State.Status}}
W1209 05:03:51.462230 1656544 host.go:58] "ha-634473-m02" host status: Stopped
I1209 05:03:51.465862 1656544 out.go:179] * Starting "ha-634473-m02" control-plane node in "ha-634473" cluster
I1209 05:03:51.468756 1656544 cache.go:134] Beginning downloading kic base image for docker with crio
I1209 05:03:51.471578 1656544 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
I1209 05:03:51.474424 1656544 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1209 05:03:51.474464 1656544 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
I1209 05:03:51.474479 1656544 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
I1209 05:03:51.474497 1656544 cache.go:65] Caching tarball of preloaded images
I1209 05:03:51.474635 1656544 preload.go:238] Found /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
I1209 05:03:51.474647 1656544 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
I1209 05:03:51.474794 1656544 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/config.json ...
I1209 05:03:51.498336 1656544 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
I1209 05:03:51.498374 1656544 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
I1209 05:03:51.498393 1656544 cache.go:243] Successfully downloaded all kic artifacts
I1209 05:03:51.498417 1656544 start.go:360] acquireMachinesLock for ha-634473-m02: {Name:mk12a21800248c722fe299fa0c218c0fccb4ad14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1209 05:03:51.498483 1656544 start.go:364] duration metric: took 43.086µs to acquireMachinesLock for "ha-634473-m02"
I1209 05:03:51.498502 1656544 start.go:96] Skipping create...Using existing machine configuration
I1209 05:03:51.498508 1656544 fix.go:54] fixHost starting: m02
I1209 05:03:51.499236 1656544 cli_runner.go:164] Run: docker container inspect ha-634473-m02 --format={{.State.Status}}
I1209 05:03:51.523631 1656544 fix.go:112] recreateIfNeeded on ha-634473-m02: state=Stopped err=<nil>
W1209 05:03:51.523673 1656544 fix.go:138] unexpected machine state, will restart: <nil>
I1209 05:03:51.526915 1656544 out.go:252] * Restarting existing docker container for "ha-634473-m02" ...
I1209 05:03:51.527013 1656544 cli_runner.go:164] Run: docker start ha-634473-m02
I1209 05:03:51.845417 1656544 cli_runner.go:164] Run: docker container inspect ha-634473-m02 --format={{.State.Status}}
I1209 05:03:51.868479 1656544 kic.go:430] container "ha-634473-m02" state is running.
I1209 05:03:51.868931 1656544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473-m02
I1209 05:03:51.900994 1656544 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/config.json ...
I1209 05:03:51.901258 1656544 machine.go:94] provisionDockerMachine start ...
I1209 05:03:51.901329 1656544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m02
I1209 05:03:51.926464 1656544 main.go:143] libmachine: Using SSH client type: native
I1209 05:03:51.926961 1656544 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34280 <nil> <nil>}
I1209 05:03:51.926989 1656544 main.go:143] libmachine: About to run SSH command:
hostname
I1209 05:03:51.928537 1656544 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I1209 05:03:55.148968 1656544 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-634473-m02

I1209 05:03:55.148995 1656544 ubuntu.go:182] provisioning hostname "ha-634473-m02"
I1209 05:03:55.149082 1656544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m02
I1209 05:03:55.184191 1656544 main.go:143] libmachine: Using SSH client type: native
I1209 05:03:55.184505 1656544 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34280 <nil> <nil>}
I1209 05:03:55.184519 1656544 main.go:143] libmachine: About to run SSH command:
sudo hostname ha-634473-m02 && echo "ha-634473-m02" | sudo tee /etc/hostname
I1209 05:03:55.404475 1656544 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-634473-m02

I1209 05:03:55.404559 1656544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m02
I1209 05:03:55.425898 1656544 main.go:143] libmachine: Using SSH client type: native
I1209 05:03:55.426311 1656544 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34280 <nil> <nil>}
I1209 05:03:55.426364 1656544 main.go:143] libmachine: About to run SSH command:

		if ! grep -xq '.*\sha-634473-m02' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-634473-m02/g' /etc/hosts;
			else 
				echo '127.0.1.1 ha-634473-m02' | sudo tee -a /etc/hosts; 
			fi
		fi
I1209 05:03:55.611827 1656544 main.go:143] libmachine: SSH cmd err, output: <nil>: 
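
provisionDockerMachine drives each of these steps over SSH to the forwarded port (127.0.0.1:34280 here) using the machine's id_rsa key: set the hostname, then pin it in /etc/hosts. A minimal sketch of running one such command with golang.org/x/crypto/ssh; runSSH and the paths are illustrative, not libmachine's API:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runSSH dials addr with a private key and runs a single command,
    // roughly what libmachine does for each provisioning step above.
    func runSSH(addr, user, keyPath, cmd string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test rig
        })
        if err != nil {
            return "", err
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer session.Close()
        out, err := session.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        out, err := runSSH("127.0.0.1:34280", "docker",
            os.Getenv("HOME")+"/.minikube/machines/ha-634473-m02/id_rsa", "hostname")
        fmt.Println(out, err)
    }
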
I1209 05:03:55.611906 1656544 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1577059/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1577059/.minikube}
I1209 05:03:55.611950 1656544 ubuntu.go:190] setting up certificates
I1209 05:03:55.611999 1656544 provision.go:84] configureAuth start
I1209 05:03:55.612106 1656544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473-m02
I1209 05:03:55.632748 1656544 provision.go:143] copyHostCerts
I1209 05:03:55.632793 1656544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem
I1209 05:03:55.632827 1656544 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem, removing ...
I1209 05:03:55.632843 1656544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem
I1209 05:03:55.632919 1656544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem (1078 bytes)
I1209 05:03:55.633010 1656544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem
I1209 05:03:55.633026 1656544 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem, removing ...
I1209 05:03:55.633031 1656544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem
I1209 05:03:55.633058 1656544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem (1123 bytes)
I1209 05:03:55.633108 1656544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem
I1209 05:03:55.633127 1656544 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem, removing ...
I1209 05:03:55.633132 1656544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem
I1209 05:03:55.633158 1656544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem (1675 bytes)
I1209 05:03:55.633211 1656544 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem org=jenkins.ha-634473-m02 san=[127.0.0.1 192.168.49.3 ha-634473-m02 localhost minikube]
I1209 05:03:55.991923 1656544 provision.go:177] copyRemoteCerts
I1209 05:03:55.992036 1656544 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1209 05:03:55.992112 1656544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m02
I1209 05:03:56.012680 1656544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34280 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m02/id_rsa Username:docker}
I1209 05:03:56.135240 1656544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I1209 05:03:56.135307 1656544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1209 05:03:56.161360 1656544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem -> /etc/docker/server.pem
I1209 05:03:56.161445 1656544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I1209 05:03:56.190029 1656544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I1209 05:03:56.190144 1656544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1209 05:03:56.222046 1656544 provision.go:87] duration metric: took 610.006359ms to configureAuth
I1209 05:03:56.222123 1656544 ubuntu.go:206] setting minikube options for container-runtime
I1209 05:03:56.222426 1656544 config.go:182] Loaded profile config "ha-634473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1209 05:03:56.222592 1656544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m02
I1209 05:03:56.242795 1656544 main.go:143] libmachine: Using SSH client type: native
I1209 05:03:56.243186 1656544 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34280 <nil> <nil>}
I1209 05:03:56.243205 1656544 main.go:143] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I1209 05:03:58.374477 1656544 main.go:143] libmachine: SSH cmd err, output: <nil>: 
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '

I1209 05:03:58.374561 1656544 machine.go:97] duration metric: took 6.473284553s to provisionDockerMachine
I1209 05:03:58.374609 1656544 start.go:293] postStartSetup for "ha-634473-m02" (driver="docker")
I1209 05:03:58.374646 1656544 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1209 05:03:58.374772 1656544 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1209 05:03:58.374853 1656544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m02
I1209 05:03:58.393230 1656544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34280 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m02/id_rsa Username:docker}
I1209 05:03:58.507308 1656544 ssh_runner.go:195] Run: cat /etc/os-release
I1209 05:03:58.511043 1656544 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1209 05:03:58.511075 1656544 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1209 05:03:58.511087 1656544 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1577059/.minikube/addons for local assets ...
I1209 05:03:58.511158 1656544 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1577059/.minikube/files for local assets ...
I1209 05:03:58.511246 1656544 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem -> 15805212.pem in /etc/ssl/certs
I1209 05:03:58.511258 1656544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem -> /etc/ssl/certs/15805212.pem
I1209 05:03:58.511357 1656544 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1209 05:03:58.523833 1656544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem --> /etc/ssl/certs/15805212.pem (1708 bytes)
I1209 05:03:58.549917 1656544 start.go:296] duration metric: took 175.26714ms for postStartSetup
I1209 05:03:58.550038 1656544 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1209 05:03:58.550109 1656544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m02
I1209 05:03:58.569060 1656544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34280 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m02/id_rsa Username:docker}
I1209 05:03:58.675873 1656544 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1209 05:03:58.681219 1656544 fix.go:56] duration metric: took 7.18268632s for fixHost
I1209 05:03:58.681248 1656544 start.go:83] releasing machines lock for "ha-634473-m02", held for 7.182756483s
I1209 05:03:58.681337 1656544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473-m02
I1209 05:03:58.711358 1656544 ssh_runner.go:195] Run: systemctl --version
I1209 05:03:58.711425 1656544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m02
I1209 05:03:58.711718 1656544 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1209 05:03:58.711777 1656544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m02
I1209 05:03:58.740473 1656544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34280 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m02/id_rsa Username:docker}
I1209 05:03:58.749700 1656544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34280 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m02/id_rsa Username:docker}
I1209 05:03:58.984738 1656544 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I1209 05:03:59.055001 1656544 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1209 05:03:59.062173 1656544 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1209 05:03:59.062274 1656544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1209 05:03:59.080917 1656544 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I1209 05:03:59.080944 1656544 start.go:496] detecting cgroup driver to use...
I1209 05:03:59.081007 1656544 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1209 05:03:59.081080 1656544 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1209 05:03:59.104150 1656544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1209 05:03:59.128551 1656544 docker.go:218] disabling cri-docker service (if available) ...
I1209 05:03:59.128643 1656544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1209 05:03:59.164249 1656544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1209 05:03:59.188801 1656544 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1209 05:03:59.501765 1656544 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1209 05:03:59.757845 1656544 docker.go:234] disabling docker service ...
I1209 05:03:59.757966 1656544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1209 05:03:59.780832 1656544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1209 05:03:59.812455 1656544 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1209 05:04:00.289198 1656544 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1209 05:04:00.711930 1656544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1209 05:04:00.729939 1656544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I1209 05:04:00.756266 1656544 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
I1209 05:04:00.756380 1656544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
I1209 05:04:00.785216 1656544 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
I1209 05:04:00.785307 1656544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
I1209 05:04:00.804161 1656544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I1209 05:04:00.833127 1656544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I1209 05:04:00.847234 1656544 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1209 05:04:00.863092 1656544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I1209 05:04:00.879560 1656544 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I1209 05:04:00.893610 1656544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
I1209 05:04:00.904832 1656544 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1209 05:04:00.921205 1656544 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1209 05:04:00.934493 1656544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1209 05:04:01.219696 1656544 ssh_runner.go:195] Run: sudo systemctl restart crio
I1209 05:05:31.557170 1656544 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.337439836s)
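[editor's note: the sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon_cgroup, unprivileged-port sysctl) before the restart, and that restart takes a full 1m30s. A minimal Go sketch of the timed-command pattern ssh_runner logs here; it uses local os/exec instead of minikube's SSH session, and the threshold value is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runTimed executes a command and, like ssh_runner.go:235 above, prints a
// "Completed:" line with the wall-clock duration once the command outlives
// the threshold. Local exec stands in for minikube's SSH session.
func runTimed(threshold time.Duration, name string, args ...string) ([]byte, error) {
	start := time.Now()
	out, err := exec.Command(name, args...).CombinedOutput()
	if d := time.Since(start); d > threshold {
		fmt.Printf("Completed: %s: (%s)\n", name, d)
	}
	return out, err
}

func main() {
	if _, err := runTimed(2*time.Second, "sleep", "3"); err != nil {
		fmt.Println("error:", err)
	}
}
]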
I1209 05:05:31.557212 1656544 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
I1209 05:05:31.557264 1656544 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I1209 05:05:31.561813 1656544 start.go:564] Will wait 60s for crictl version
I1209 05:05:31.561882 1656544 ssh_runner.go:195] Run: which crictl
I1209 05:05:31.565284 1656544 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1209 05:05:31.596092 1656544 start.go:580] Version:  0.1.0
RuntimeName:  cri-o
RuntimeVersion:  1.34.3
RuntimeApiVersion:  v1
I1209 05:05:31.596174 1656544 ssh_runner.go:195] Run: crio --version
I1209 05:05:31.631795 1656544 ssh_runner.go:195] Run: crio --version
I1209 05:05:31.675160 1656544 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
I1209 05:05:31.678113 1656544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1209 05:05:31.753932 1656544 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:true NGoroutines:82 SystemTime:2025-12-09 05:05:31.743881306 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1209 05:05:31.754108 1656544 cli_runner.go:164] Run: docker network inspect ha-634473 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1209 05:05:31.780833 1656544 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
I1209 05:05:31.785192 1656544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
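[editor's note: the one-liner above is minikube's idiom for pinning host.minikube.internal in /etc/hosts: drop any stale entry, append the gateway IP, copy the result back over the file. A rough Go equivalent, assuming tab-separated entries as in the log; ensureHostsEntry is a hypothetical helper, not minikube's API:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any line already ending in "\t<host>" and appends
// "<ip>\t<host>", mirroring the grep -v / echo / cp pipeline above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	fmt.Println(ensureHostsEntry("/etc/hosts", "192.168.49.1", "host.minikube.internal"))
}
]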
I1209 05:05:31.797716 1656544 mustload.go:66] Loading cluster: ha-634473
I1209 05:05:31.797983 1656544 config.go:182] Loaded profile config "ha-634473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1209 05:05:31.798258 1656544 cli_runner.go:164] Run: docker container inspect ha-634473 --format={{.State.Status}}
I1209 05:05:31.819964 1656544 host.go:66] Checking if "ha-634473" exists ...
I1209 05:05:31.820417 1656544 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473 for IP: 192.168.49.3
I1209 05:05:31.820430 1656544 certs.go:195] generating shared ca certs ...
I1209 05:05:31.820445 1656544 certs.go:227] acquiring lock for ca certs: {Name:mkbe8bce08db7aa945866791683d426e1b560718 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 05:05:31.820673 1656544 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key
I1209 05:05:31.820724 1656544 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key
I1209 05:05:31.820732 1656544 certs.go:257] generating profile certs ...
I1209 05:05:31.820815 1656544 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/client.key
I1209 05:05:31.820842 1656544 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.key.0021bceb
I1209 05:05:31.820855 1656544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.crt.0021bceb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
I1209 05:05:32.303744 1656544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.crt.0021bceb ...
I1209 05:05:32.303778 1656544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.crt.0021bceb: {Name:mkf62498bdc03b83355b67cc140ab28a6436c1de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 05:05:32.303966 1656544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.key.0021bceb ...
I1209 05:05:32.303984 1656544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.key.0021bceb: {Name:mkb2df0cf1be72f129409dd291f8dd6f4a8ddc56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 05:05:32.304076 1656544 certs.go:382] copying /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.crt.0021bceb -> /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.crt
I1209 05:05:32.304219 1656544 certs.go:386] copying /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.key.0021bceb -> /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.key
I1209 05:05:32.304356 1656544 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/proxy-client.key
I1209 05:05:32.304375 1656544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I1209 05:05:32.304391 1656544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I1209 05:05:32.304408 1656544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I1209 05:05:32.304419 1656544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I1209 05:05:32.304444 1656544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I1209 05:05:32.304460 1656544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I1209 05:05:32.304476 1656544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I1209 05:05:32.304490 1656544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I1209 05:05:32.304546 1656544 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem (1338 bytes)
W1209 05:05:32.304586 1656544 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521_empty.pem, impossibly tiny 0 bytes
I1209 05:05:32.304598 1656544 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem (1679 bytes)
I1209 05:05:32.304627 1656544 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem (1078 bytes)
I1209 05:05:32.304655 1656544 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem (1123 bytes)
I1209 05:05:32.304686 1656544 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem (1675 bytes)
I1209 05:05:32.304734 1656544 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem (1708 bytes)
I1209 05:05:32.304772 1656544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem -> /usr/share/ca-certificates/1580521.pem
I1209 05:05:32.304789 1656544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem -> /usr/share/ca-certificates/15805212.pem
I1209 05:05:32.304801 1656544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
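[editor's note: the apiserver certificate regenerated above carries IP SANs for the service IP (10.96.0.1), loopback, every control-plane node (192.168.49.2-4), and the kube-vip VIP (192.168.49.254), so clients can verify the serving cert against any of those endpoints. A minimal crypto/x509 sketch of building such a SAN list; it self-signs with an ECDSA key for brevity, whereas minikube signs with its cluster CA and its key type and lifetime may differ:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	// SAN IPs from the log above: service IP, loopback, node IPs, and the VIP.
	var sans []net.IP
	for _, s := range []string{"10.96.0.1", "127.0.0.1", "10.0.0.1",
		"192.168.49.2", "192.168.49.3", "192.168.49.4", "192.168.49.254"} {
		sans = append(sans, net.ParseIP(s))
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  sans,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	fmt.Println(len(der), err)
}
]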
I1209 05:05:32.304870 1656544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473
I1209 05:05:32.324285 1656544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473/id_rsa Username:docker}
I1209 05:05:32.426935 1656544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
I1209 05:05:32.432249 1656544 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
I1209 05:05:32.441161 1656544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
I1209 05:05:32.444967 1656544 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
I1209 05:05:32.453235 1656544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
I1209 05:05:32.456851 1656544 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
I1209 05:05:32.465808 1656544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
I1209 05:05:32.470006 1656544 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
I1209 05:05:32.480785 1656544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
I1209 05:05:32.484606 1656544 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
I1209 05:05:32.493895 1656544 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
I1209 05:05:32.497815 1656544 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
I1209 05:05:32.516330 1656544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1209 05:05:32.537129 1656544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1209 05:05:32.557210 1656544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1209 05:05:32.577761 1656544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1209 05:05:32.599181 1656544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
I1209 05:05:32.621075 1656544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1209 05:05:32.641425 1656544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1209 05:05:32.661666 1656544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1209 05:05:32.684550 1656544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem --> /usr/share/ca-certificates/1580521.pem (1338 bytes)
I1209 05:05:32.705359 1656544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem --> /usr/share/ca-certificates/15805212.pem (1708 bytes)
I1209 05:05:32.726767 1656544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1209 05:05:32.746959 1656544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
I1209 05:05:32.764178 1656544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
I1209 05:05:32.779337 1656544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
I1209 05:05:32.793734 1656544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
I1209 05:05:32.814444 1656544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
I1209 05:05:32.829256 1656544 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
I1209 05:05:32.843106 1656544 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
I1209 05:05:32.856844 1656544 ssh_runner.go:195] Run: openssl version
I1209 05:05:32.863844 1656544 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1209 05:05:32.872354 1656544 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1209 05:05:32.880202 1656544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1209 05:05:32.884261 1656544 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:17 /usr/share/ca-certificates/minikubeCA.pem
I1209 05:05:32.884355 1656544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1209 05:05:32.928566 1656544 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1209 05:05:32.936686 1656544 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1580521.pem
I1209 05:05:32.944773 1656544 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1580521.pem /etc/ssl/certs/1580521.pem
I1209 05:05:32.953219 1656544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1580521.pem
I1209 05:05:32.957805 1656544 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:27 /usr/share/ca-certificates/1580521.pem
I1209 05:05:32.957881 1656544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1580521.pem
I1209 05:05:33.000409 1656544 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I1209 05:05:33.010898 1656544 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15805212.pem
I1209 05:05:33.019893 1656544 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15805212.pem /etc/ssl/certs/15805212.pem
I1209 05:05:33.029698 1656544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15805212.pem
I1209 05:05:33.033720 1656544 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:27 /usr/share/ca-certificates/15805212.pem
I1209 05:05:33.033884 1656544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15805212.pem
I1209 05:05:33.076130 1656544 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I1209 05:05:33.084255 1656544 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1209 05:05:33.088219 1656544 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I1209 05:05:33.130734 1656544 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I1209 05:05:33.172502 1656544 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I1209 05:05:33.214148 1656544 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I1209 05:05:33.263175 1656544 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I1209 05:05:33.327359 1656544 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
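[editor's note: each check above is `openssl x509 -noout -in <cert> -checkend 86400`, i.e. "does this cert expire within the next 24 hours?". The same test expressed in Go, as a sketch with crypto/x509:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresSoon reports whether the PEM certificate at path expires within the
// next 24 hours, matching the -checkend 86400 probes above.
func expiresSoon(path string) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(24 * time.Hour).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	fmt.Println(soon, err)
}
]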
I1209 05:05:33.401807 1656544 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.2 crio true true} ...
I1209 05:05:33.402004 1656544 kubeadm.go:947] kubelet [Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-634473-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3

[Install]
config:
{KubernetesVersion:v1.34.2 ClusterName:ha-634473 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
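[editor's note: the kubelet unit drop-in above is rendered per node, hence --hostname-override=ha-634473-m02 and --node-ip=192.168.49.3 for this secondary control plane. A sketch of rendering such a drop-in with text/template; the struct fields are illustrative assumptions, not minikube's actual template data:

package main

import (
	"os"
	"text/template"
)

// unitData carries the per-node values substituted into the drop-in.
type unitData struct {
	Runtime, Version, Node, IP string
}

var unit = template.Must(template.New("kubelet").Parse(`[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}}

[Install]
`))

func main() {
	// Render the drop-in for the node seen in the log above.
	unit.Execute(os.Stdout, unitData{"crio", "v1.34.2", "ha-634473-m02", "192.168.49.3"})
}
]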
I1209 05:05:33.402071 1656544 kube-vip.go:115] generating kube-vip config ...
I1209 05:05:33.402156 1656544 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
I1209 05:05:33.424236 1656544 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
stdout:

stderr:
I1209 05:05:33.424391 1656544 kube-vip.go:137] kube-vip config:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args:
    - manager
    env:
    - name: vip_arp
      value: "true"
    - name: port
      value: "8443"
    - name: vip_nodename
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    - name: vip_interface
      value: eth0
    - name: vip_cidr
      value: "32"
    - name: dns_mode
      value: first
    - name: cp_enable
      value: "true"
    - name: cp_namespace
      value: kube-system
    - name: vip_leaderelection
      value: "true"
    - name: vip_leasename
      value: plndr-cp-lock
    - name: vip_leaseduration
      value: "5"
    - name: vip_renewdeadline
      value: "3"
    - name: vip_retryperiod
      value: "1"
    - name: address
      value: 192.168.49.254
    - name: prometheus_server
      value: :2112
    image: ghcr.io/kube-vip/kube-vip:v1.0.2
    imagePullPolicy: IfNotPresent
    name: kube-vip
    resources: {}
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
        - NET_RAW
    volumeMounts:
    - mountPath: /etc/kubernetes/admin.conf
      name: kubeconfig
  hostAliases:
  - hostnames:
    - kubernetes
    ip: 127.0.0.1
  hostNetwork: true
  volumes:
  - hostPath:
      path: "/etc/kubernetes/admin.conf"
    name: kubeconfig
status: {}
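[editor's note: because `lsmod | grep ip_vs` exited non-zero above, kube-vip gives up on IPVS-based control-plane load balancing; the generated static pod instead advertises the VIP 192.168.49.254 via ARP (vip_arp: "true") with Lease-based leader election among the control-plane nodes. A sketch of that kernel-module probe:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Probe for the ip_vs kernel modules the same way the log above does: run
// lsmod and look for an ip_vs entry in the loaded-module list.
func main() {
	out, err := exec.Command("lsmod").Output()
	if err != nil {
		fmt.Println("lsmod failed:", err)
		return
	}
	if strings.Contains(string(out), "ip_vs") {
		fmt.Println("ip_vs available: IPVS load balancing possible")
	} else {
		fmt.Println("ip_vs missing: falling back to ARP-advertised VIP")
	}
}
]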
I1209 05:05:33.424484 1656544 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
I1209 05:05:33.438843 1656544 binaries.go:51] Found k8s binaries, skipping transfer
I1209 05:05:33.438962 1656544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
I1209 05:05:33.449469 1656544 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
I1209 05:05:33.468645 1656544 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1209 05:05:33.488036 1656544 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
I1209 05:05:33.516514 1656544 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
I1209 05:05:33.521687 1656544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1209 05:05:33.538143 1656544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1209 05:05:33.782180 1656544 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1209 05:05:33.800950 1656544 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
I1209 05:05:33.801130 1656544 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I1209 05:05:33.801287 1656544 config.go:182] Loaded profile config "ha-634473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1209 05:05:33.804721 1656544 out.go:179] * Verifying Kubernetes components...
I1209 05:05:33.804820 1656544 out.go:179] * Enabled addons: 
I1209 05:05:33.808603 1656544 addons.go:530] duration metric: took 7.468135ms for enable addons: enabled=[]
I1209 05:05:33.808702 1656544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1209 05:05:34.059643 1656544 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1209 05:05:34.084955 1656544 kapi.go:59] client config for ha-634473: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/client.crt", KeyFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/client.key", CAFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
W1209 05:05:34.085123 1656544 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
I1209 05:05:34.085721 1656544 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1209 05:05:34.085765 1656544 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
I1209 05:05:34.085809 1656544 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1209 05:05:34.085859 1656544 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1209 05:05:34.085891 1656544 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1209 05:05:34.085915 1656544 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1209 05:05:34.086248 1656544 node_ready.go:35] waiting up to 6m0s for node "ha-634473-m02" to be "Ready" ...
W1209 05:05:36.103651 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:05:38.590482 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:05:41.090270 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:05:43.591188 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:05:46.090541 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:05:48.090696 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:05:50.590421 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:05:53.090228 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:05:55.090848 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:05:57.590322 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:05:59.591829 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:06:02.090334 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:06:04.590162 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:06:07.089973 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:06:09.090315 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:06:11.589951 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:06:13.590812 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:06:15.592504 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:06:18.090610 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:06:20.590772 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:06:23.090837 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:06:25.590759 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:06:28.090621 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:06:30.091664 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:06:32.592290 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:06:35.090015 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:06:37.590450 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:06:39.591042 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:06:42.094994 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:06:44.590034 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:06:46.590090 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:06:48.590513 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:06:51.089861 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:06:53.090622 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:06:55.590916 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:06:58.090796 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:07:00.113034 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:07:02.590904 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:07:05.089809 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:07:07.090521 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:07:09.592281 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:07:12.090181 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:07:14.091213 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:07:16.590051 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:07:18.590762 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:07:21.090500 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:07:23.590507 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:07:25.600190 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:07:28.090563 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:07:30.590868 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:07:33.090093 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:07:35.091497 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:07:37.590538 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:07:40.089723 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:07:42.090511 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:07:44.590357 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:07:47.089837 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:07:49.090700 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:07:51.091091 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:07:53.589912 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:07:55.590458 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:07:58.090913 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:08:00.135445 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:08:02.589896 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:08:04.590245 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:08:07.090170 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:08:09.090895 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:08:11.091065 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:08:13.590732 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:08:16.090642 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:08:18.090799 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:08:20.590842 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:08:23.089800 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:08:25.091440 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:08:27.591306 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:08:30.091240 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:08:32.590458 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:08:34.590739 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:08:37.091066 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:08:39.590078 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:08:41.590161 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:08:44.090115 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:08:46.090227 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:08:48.590884 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:08:51.090772 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:08:53.590503 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:08:56.090744 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:08:58.090867 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:09:00.106676 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:09:02.592047 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:09:05.090410 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:09:07.590683 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:09:10.090393 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:09:12.590255 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:09:14.595109 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:09:17.090567 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:09:19.091670 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:09:21.591661 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:09:24.090547 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:09:26.094243 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:09:28.590556 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:09:30.591037 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:09:33.090610 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:09:35.591096 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:09:38.090878 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:09:40.590542 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:09:42.590796 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:09:45.107215 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:09:47.592627 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:09:50.091275 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:09:52.594090 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:09:55.090645 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:09:57.090842 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:09:59.590649 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:10:02.090946 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:10:04.091223 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:10:06.589904 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:10:08.590474 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:10:11.090938 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:10:13.590021 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:10:15.590473 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:10:18.090505 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:10:20.091541 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:10:22.590204 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:10:25.091859 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:10:27.589626 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:10:29.590247 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:10:32.090311 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:10:34.090638 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:10:36.590552 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:10:38.590781 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:10:41.089835 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:10:43.089984 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:10:45.092620 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:10:47.590471 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:10:50.090359 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:10:52.091099 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:10:54.595591 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:10:57.090552 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:10:59.589607 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:11:01.591105 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:11:04.092654 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:11:06.589996 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:11:08.590738 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:11:10.591222 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:11:13.091038 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:11:15.091178 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:11:17.589989 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:11:19.591033 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:11:22.090902 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:11:24.589955 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:11:27.090062 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:11:29.590406 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
W1209 05:11:32.091164 1656544 node_ready.go:57] node "ha-634473-m02" has "Ready":"Unknown" status (will retry)
I1209 05:11:34.086991 1656544 node_ready.go:38] duration metric: took 6m0.000689641s for node "ha-634473-m02" to be "Ready" ...
I1209 05:11:34.089993 1656544 out.go:203] 
W1209 05:11:34.092799 1656544 out.go:285] X Exiting due to GUEST_NODE_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
W1209 05:11:34.092825 1656544 out.go:285] * 
W1209 05:11:34.101276 1656544 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1209 05:11:34.104378 1656544 out.go:203] 

ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-linux-arm64 -p ha-634473 node start m02 --alsologtostderr -v 5": exit status 80
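[editor's note: the failure above is a deadline-bounded poll: node_ready.go retried roughly every 2.5s from 05:05:34 until the 6m0s budget expired at 05:11:34, then surfaced "WaitNodeCondition: context deadline exceeded". A minimal Go sketch of that pattern; nodeReady is a stand-in for the real check of the node's "Ready" condition via the Kubernetes API:

package main

import (
	"context"
	"fmt"
	"time"
)

var attempts int

// nodeReady stands in for the API query behind node_ready.go above; this
// stub succeeds on the fourth poll so the sketch terminates quickly.
func nodeReady() (bool, error) {
	attempts++
	return attempts > 3, nil
}

func main() {
	// Poll until the node is Ready or the 6-minute budget from the log expires.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		if ok, err := nodeReady(); err == nil && ok {
			fmt.Println("node is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("WaitNodeCondition:", ctx.Err())
			return
		case <-time.After(2 * time.Second):
		}
	}
}
]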
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-634473 status --alsologtostderr -v 5: exit status 2 (1.064827327s)

-- stdout --
	ha-634473
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-634473-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-634473-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-634473-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1209 05:11:34.241540 1658476 out.go:360] Setting OutFile to fd 1 ...
	I1209 05:11:34.241831 1658476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:11:34.241871 1658476 out.go:374] Setting ErrFile to fd 2...
	I1209 05:11:34.241892 1658476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:11:34.242172 1658476 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 05:11:34.242405 1658476 out.go:368] Setting JSON to false
	I1209 05:11:34.242470 1658476 mustload.go:66] Loading cluster: ha-634473
	I1209 05:11:34.242562 1658476 notify.go:221] Checking for updates...
	I1209 05:11:34.243194 1658476 config.go:182] Loaded profile config "ha-634473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 05:11:34.243239 1658476 status.go:174] checking status of ha-634473 ...
	I1209 05:11:34.244139 1658476 cli_runner.go:164] Run: docker container inspect ha-634473 --format={{.State.Status}}
	I1209 05:11:34.267519 1658476 status.go:371] ha-634473 host status = "Running" (err=<nil>)
	I1209 05:11:34.267541 1658476 host.go:66] Checking if "ha-634473" exists ...
	I1209 05:11:34.268000 1658476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473
	I1209 05:11:34.306746 1658476 host.go:66] Checking if "ha-634473" exists ...
	I1209 05:11:34.307151 1658476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:11:34.307207 1658476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473
	I1209 05:11:34.329603 1658476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473/id_rsa Username:docker}
	I1209 05:11:34.444716 1658476 ssh_runner.go:195] Run: systemctl --version
	I1209 05:11:34.452112 1658476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:11:34.466879 1658476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:11:34.542042 1658476 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:true NGoroutines:82 SystemTime:2025-12-09 05:11:34.53266955 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:11:34.542647 1658476 kubeconfig.go:125] found "ha-634473" server: "https://192.168.49.254:8443"
	I1209 05:11:34.542689 1658476 api_server.go:166] Checking apiserver status ...
	I1209 05:11:34.542734 1658476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:11:34.554736 1658476 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1261/cgroup
	I1209 05:11:34.563306 1658476 api_server.go:182] apiserver freezer: "2:freezer:/docker/451a940c6775333987f96bda1a8dac55be755a72cdd93ec853e9dcbc59469bf4/crio/crio-f22a05924eab128b6621d22ab5e9561c5dc32a3192e4c7c7de9d896fd57d6ced"
	I1209 05:11:34.563374 1658476 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/451a940c6775333987f96bda1a8dac55be755a72cdd93ec853e9dcbc59469bf4/crio/crio-f22a05924eab128b6621d22ab5e9561c5dc32a3192e4c7c7de9d896fd57d6ced/freezer.state
	I1209 05:11:34.571227 1658476 api_server.go:204] freezer state: "THAWED"
	I1209 05:11:34.571258 1658476 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1209 05:11:34.581109 1658476 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1209 05:11:34.581147 1658476 status.go:463] ha-634473 apiserver status = Running (err=<nil>)
	I1209 05:11:34.581179 1658476 status.go:176] ha-634473 status: &{Name:ha-634473 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 05:11:34.581203 1658476 status.go:174] checking status of ha-634473-m02 ...
	I1209 05:11:34.581538 1658476 cli_runner.go:164] Run: docker container inspect ha-634473-m02 --format={{.State.Status}}
	I1209 05:11:34.600243 1658476 status.go:371] ha-634473-m02 host status = "Running" (err=<nil>)
	I1209 05:11:34.600270 1658476 host.go:66] Checking if "ha-634473-m02" exists ...
	I1209 05:11:34.600606 1658476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473-m02
	I1209 05:11:34.628683 1658476 host.go:66] Checking if "ha-634473-m02" exists ...
	I1209 05:11:34.629012 1658476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:11:34.629056 1658476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m02
	I1209 05:11:34.650524 1658476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34280 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m02/id_rsa Username:docker}
	I1209 05:11:34.760341 1658476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:11:34.777376 1658476 kubeconfig.go:125] found "ha-634473" server: "https://192.168.49.254:8443"
	I1209 05:11:34.777405 1658476 api_server.go:166] Checking apiserver status ...
	I1209 05:11:34.777459 1658476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1209 05:11:34.788644 1658476 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1209 05:11:34.788669 1658476 status.go:463] ha-634473-m02 apiserver status = Running (err=<nil>)
	I1209 05:11:34.788679 1658476 status.go:176] ha-634473-m02 status: &{Name:ha-634473-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 05:11:34.788696 1658476 status.go:174] checking status of ha-634473-m03 ...
	I1209 05:11:34.789021 1658476 cli_runner.go:164] Run: docker container inspect ha-634473-m03 --format={{.State.Status}}
	I1209 05:11:34.822868 1658476 status.go:371] ha-634473-m03 host status = "Running" (err=<nil>)
	I1209 05:11:34.822900 1658476 host.go:66] Checking if "ha-634473-m03" exists ...
	I1209 05:11:34.823208 1658476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473-m03
	I1209 05:11:34.840367 1658476 host.go:66] Checking if "ha-634473-m03" exists ...
	I1209 05:11:34.840685 1658476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:11:34.840739 1658476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m03
	I1209 05:11:34.861606 1658476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34270 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m03/id_rsa Username:docker}
	I1209 05:11:34.971984 1658476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:11:34.986678 1658476 kubeconfig.go:125] found "ha-634473" server: "https://192.168.49.254:8443"
	I1209 05:11:34.986759 1658476 api_server.go:166] Checking apiserver status ...
	I1209 05:11:34.986830 1658476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:11:34.999348 1658476 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1190/cgroup
	I1209 05:11:35.022761 1658476 api_server.go:182] apiserver freezer: "2:freezer:/docker/4096476f12329d36066415868bf1371a304c4e35cf5869220e753759e4326bd5/crio/crio-030ab8745dc3e732a1578e60ecfe89b581303f4356948b70e019e0b0f8293a4f"
	I1209 05:11:35.022845 1658476 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4096476f12329d36066415868bf1371a304c4e35cf5869220e753759e4326bd5/crio/crio-030ab8745dc3e732a1578e60ecfe89b581303f4356948b70e019e0b0f8293a4f/freezer.state
	I1209 05:11:35.031998 1658476 api_server.go:204] freezer state: "THAWED"
	I1209 05:11:35.032031 1658476 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1209 05:11:35.040999 1658476 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1209 05:11:35.041043 1658476 status.go:463] ha-634473-m03 apiserver status = Running (err=<nil>)
	I1209 05:11:35.041054 1658476 status.go:176] ha-634473-m03 status: &{Name:ha-634473-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 05:11:35.041096 1658476 status.go:174] checking status of ha-634473-m04 ...
	I1209 05:11:35.041468 1658476 cli_runner.go:164] Run: docker container inspect ha-634473-m04 --format={{.State.Status}}
	I1209 05:11:35.059522 1658476 status.go:371] ha-634473-m04 host status = "Running" (err=<nil>)
	I1209 05:11:35.059552 1658476 host.go:66] Checking if "ha-634473-m04" exists ...
	I1209 05:11:35.059859 1658476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473-m04
	I1209 05:11:35.078176 1658476 host.go:66] Checking if "ha-634473-m04" exists ...
	I1209 05:11:35.078552 1658476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:11:35.078713 1658476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m04
	I1209 05:11:35.098224 1658476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34275 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m04/id_rsa Username:docker}
	I1209 05:11:35.204577 1658476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:11:35.219557 1658476 status.go:176] ha-634473-m04 status: &{Name:ha-634473-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1209 05:11:35.229657 1580521 retry.go:31] will retry after 1.254012931s: exit status 2
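The stderr trace above shows how minikube decides each control-plane node's apiserver state: it looks up the kube-apiserver PID with pgrep (when pgrep exits 1, as on ha-634473-m02, the node is reported as apiserver: Stopped), resolves that PID's freezer cgroup to confirm the container is THAWED rather than frozen, and finally probes /healthz on the kubeconfig server address, where only an HTTP 200 counts as Running. A minimal standalone Go sketch of that probe sequence follows; runSSH, checkAPIServer, and the returned state strings are illustrative assumptions, not minikube's actual internals, and TLS verification is skipped only because a bare client does not trust the cluster's self-signed CA.

	// Sketch of the apiserver probe sequence visible in the log above.
	// Assumed names; not minikube's own API.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"os/exec"
		"strings"
		"time"
	)

	// runSSH stands in for minikube's ssh_runner; for illustration it
	// simply runs the command locally through sh -c.
	func runSSH(cmd string) (string, error) {
		out, err := exec.Command("sh", "-c", cmd).Output()
		return strings.TrimSpace(string(out)), err
	}

	func checkAPIServer(healthz string) (string, error) {
		// 1. Find the kube-apiserver PID. A non-zero pgrep exit means
		//    the process is gone, i.e. "Stopped".
		pid, err := runSSH("sudo pgrep -xnf kube-apiserver.*minikube.*")
		if err != nil {
			return "Stopped", nil
		}
		// 2. Resolve the PID's freezer cgroup and check it is THAWED
		//    (a frozen cgroup would mean the node is paused).
		line, err := runSSH("sudo egrep ^[0-9]+:freezer: /proc/" + pid + "/cgroup")
		if err != nil {
			return "", err
		}
		i := strings.Index(line, ":freezer:")
		if i < 0 {
			return "", fmt.Errorf("no freezer entry in %q", line)
		}
		state, err := runSSH("sudo cat /sys/fs/cgroup/freezer" +
			line[i+len(":freezer:"):] + "/freezer.state")
		if err != nil || state != "THAWED" {
			return "Paused", err
		}
		// 3. Probe /healthz; only a 200 response counts as "Running".
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Skip verification: the apiserver cert is signed by the
			// cluster's own CA, which this bare probe does not load.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(healthz)
		if err != nil {
			return "Stopped", nil
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return "Error", nil
		}
		return "Running", nil
	}

	func main() {
		s, err := checkAPIServer("https://192.168.49.254:8443/healthz")
		fmt.Println(s, err)
	}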
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-634473 status --alsologtostderr -v 5: exit status 2 (1.01305319s)

-- stdout --
	ha-634473
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-634473-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-634473-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-634473-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1209 05:11:36.527009 1658667 out.go:360] Setting OutFile to fd 1 ...
	I1209 05:11:36.527631 1658667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:11:36.527670 1658667 out.go:374] Setting ErrFile to fd 2...
	I1209 05:11:36.527693 1658667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:11:36.528111 1658667 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 05:11:36.528536 1658667 out.go:368] Setting JSON to false
	I1209 05:11:36.528611 1658667 mustload.go:66] Loading cluster: ha-634473
	I1209 05:11:36.529382 1658667 config.go:182] Loaded profile config "ha-634473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 05:11:36.529427 1658667 status.go:174] checking status of ha-634473 ...
	I1209 05:11:36.529472 1658667 notify.go:221] Checking for updates...
	I1209 05:11:36.530769 1658667 cli_runner.go:164] Run: docker container inspect ha-634473 --format={{.State.Status}}
	I1209 05:11:36.551545 1658667 status.go:371] ha-634473 host status = "Running" (err=<nil>)
	I1209 05:11:36.551570 1658667 host.go:66] Checking if "ha-634473" exists ...
	I1209 05:11:36.551891 1658667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473
	I1209 05:11:36.581371 1658667 host.go:66] Checking if "ha-634473" exists ...
	I1209 05:11:36.581697 1658667 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:11:36.581742 1658667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473
	I1209 05:11:36.600922 1658667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473/id_rsa Username:docker}
	I1209 05:11:36.724604 1658667 ssh_runner.go:195] Run: systemctl --version
	I1209 05:11:36.732290 1658667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:11:36.747405 1658667 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:11:36.835392 1658667 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:true NGoroutines:82 SystemTime:2025-12-09 05:11:36.82548905 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:11:36.835966 1658667 kubeconfig.go:125] found "ha-634473" server: "https://192.168.49.254:8443"
	I1209 05:11:36.835998 1658667 api_server.go:166] Checking apiserver status ...
	I1209 05:11:36.836053 1658667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:11:36.848690 1658667 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1261/cgroup
	I1209 05:11:36.857736 1658667 api_server.go:182] apiserver freezer: "2:freezer:/docker/451a940c6775333987f96bda1a8dac55be755a72cdd93ec853e9dcbc59469bf4/crio/crio-f22a05924eab128b6621d22ab5e9561c5dc32a3192e4c7c7de9d896fd57d6ced"
	I1209 05:11:36.857813 1658667 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/451a940c6775333987f96bda1a8dac55be755a72cdd93ec853e9dcbc59469bf4/crio/crio-f22a05924eab128b6621d22ab5e9561c5dc32a3192e4c7c7de9d896fd57d6ced/freezer.state
	I1209 05:11:36.866885 1658667 api_server.go:204] freezer state: "THAWED"
	I1209 05:11:36.866916 1658667 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1209 05:11:36.876877 1658667 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1209 05:11:36.876908 1658667 status.go:463] ha-634473 apiserver status = Running (err=<nil>)
	I1209 05:11:36.876919 1658667 status.go:176] ha-634473 status: &{Name:ha-634473 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 05:11:36.876964 1658667 status.go:174] checking status of ha-634473-m02 ...
	I1209 05:11:36.877357 1658667 cli_runner.go:164] Run: docker container inspect ha-634473-m02 --format={{.State.Status}}
	I1209 05:11:36.897093 1658667 status.go:371] ha-634473-m02 host status = "Running" (err=<nil>)
	I1209 05:11:36.897115 1658667 host.go:66] Checking if "ha-634473-m02" exists ...
	I1209 05:11:36.897421 1658667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473-m02
	I1209 05:11:36.922893 1658667 host.go:66] Checking if "ha-634473-m02" exists ...
	I1209 05:11:36.923267 1658667 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:11:36.923324 1658667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m02
	I1209 05:11:36.942536 1658667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34280 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m02/id_rsa Username:docker}
	I1209 05:11:37.052674 1658667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:11:37.066952 1658667 kubeconfig.go:125] found "ha-634473" server: "https://192.168.49.254:8443"
	I1209 05:11:37.066985 1658667 api_server.go:166] Checking apiserver status ...
	I1209 05:11:37.067027 1658667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1209 05:11:37.078385 1658667 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1209 05:11:37.078410 1658667 status.go:463] ha-634473-m02 apiserver status = Running (err=<nil>)
	I1209 05:11:37.078420 1658667 status.go:176] ha-634473-m02 status: &{Name:ha-634473-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 05:11:37.078437 1658667 status.go:174] checking status of ha-634473-m03 ...
	I1209 05:11:37.078868 1658667 cli_runner.go:164] Run: docker container inspect ha-634473-m03 --format={{.State.Status}}
	I1209 05:11:37.098555 1658667 status.go:371] ha-634473-m03 host status = "Running" (err=<nil>)
	I1209 05:11:37.098664 1658667 host.go:66] Checking if "ha-634473-m03" exists ...
	I1209 05:11:37.099019 1658667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473-m03
	I1209 05:11:37.126846 1658667 host.go:66] Checking if "ha-634473-m03" exists ...
	I1209 05:11:37.127187 1658667 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:11:37.127234 1658667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m03
	I1209 05:11:37.145675 1658667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34270 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m03/id_rsa Username:docker}
	I1209 05:11:37.252223 1658667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:11:37.267240 1658667 kubeconfig.go:125] found "ha-634473" server: "https://192.168.49.254:8443"
	I1209 05:11:37.267269 1658667 api_server.go:166] Checking apiserver status ...
	I1209 05:11:37.267319 1658667 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:11:37.278790 1658667 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1190/cgroup
	I1209 05:11:37.287221 1658667 api_server.go:182] apiserver freezer: "2:freezer:/docker/4096476f12329d36066415868bf1371a304c4e35cf5869220e753759e4326bd5/crio/crio-030ab8745dc3e732a1578e60ecfe89b581303f4356948b70e019e0b0f8293a4f"
	I1209 05:11:37.287294 1658667 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4096476f12329d36066415868bf1371a304c4e35cf5869220e753759e4326bd5/crio/crio-030ab8745dc3e732a1578e60ecfe89b581303f4356948b70e019e0b0f8293a4f/freezer.state
	I1209 05:11:37.295726 1658667 api_server.go:204] freezer state: "THAWED"
	I1209 05:11:37.295764 1658667 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1209 05:11:37.304389 1658667 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1209 05:11:37.304470 1658667 status.go:463] ha-634473-m03 apiserver status = Running (err=<nil>)
	I1209 05:11:37.304495 1658667 status.go:176] ha-634473-m03 status: &{Name:ha-634473-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 05:11:37.304537 1658667 status.go:174] checking status of ha-634473-m04 ...
	I1209 05:11:37.304888 1658667 cli_runner.go:164] Run: docker container inspect ha-634473-m04 --format={{.State.Status}}
	I1209 05:11:37.322985 1658667 status.go:371] ha-634473-m04 host status = "Running" (err=<nil>)
	I1209 05:11:37.323009 1658667 host.go:66] Checking if "ha-634473-m04" exists ...
	I1209 05:11:37.323338 1658667 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473-m04
	I1209 05:11:37.340590 1658667 host.go:66] Checking if "ha-634473-m04" exists ...
	I1209 05:11:37.340909 1658667 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:11:37.340947 1658667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m04
	I1209 05:11:37.367217 1658667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34275 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m04/id_rsa Username:docker}
	I1209 05:11:37.476443 1658667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:11:37.489150 1658667 status.go:176] ha-634473-m04 status: &{Name:ha-634473-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1209 05:11:37.497780 1580521 retry.go:31] will retry after 1.022503415s: exit status 2
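Each failed status call exits 2 because ha-634473-m02 still reports apiserver: Stopped, so the harness backs off and tries again; the varying "will retry after" delays (1.254s, 1.022s, 1.241s, 4.756s) suggest a randomized backoff. Below is a small Go sketch of such a retry loop; retryStatus and the jitter formula are assumptions for illustration, not the actual retry.go implementation.

	// Sketch: retry a failing check with growing, jittered delays.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func retryStatus(check func() error, attempts int) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = check(); err == nil {
				return nil
			}
			// Base delay grows with the attempt number; the random
			// jitter keeps concurrent retries from synchronizing.
			base := time.Duration(i+1) * time.Second
			jitter := time.Duration(rand.Int63n(int64(base)))
			delay := base/2 + jitter
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		err := retryStatus(func() error {
			return errors.New("exit status 2") // stand-in for the failing status command
		}, 4)
		fmt.Println("giving up:", err)
	}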
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-634473 status --alsologtostderr -v 5: exit status 2 (989.763584ms)

-- stdout --
	ha-634473
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-634473-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-634473-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-634473-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1209 05:11:38.572781 1658854 out.go:360] Setting OutFile to fd 1 ...
	I1209 05:11:38.572965 1658854 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:11:38.572995 1658854 out.go:374] Setting ErrFile to fd 2...
	I1209 05:11:38.573016 1658854 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:11:38.573310 1658854 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 05:11:38.573529 1658854 out.go:368] Setting JSON to false
	I1209 05:11:38.573598 1658854 mustload.go:66] Loading cluster: ha-634473
	I1209 05:11:38.573675 1658854 notify.go:221] Checking for updates...
	I1209 05:11:38.575303 1658854 config.go:182] Loaded profile config "ha-634473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 05:11:38.575361 1658854 status.go:174] checking status of ha-634473 ...
	I1209 05:11:38.577885 1658854 cli_runner.go:164] Run: docker container inspect ha-634473 --format={{.State.Status}}
	I1209 05:11:38.598165 1658854 status.go:371] ha-634473 host status = "Running" (err=<nil>)
	I1209 05:11:38.598185 1658854 host.go:66] Checking if "ha-634473" exists ...
	I1209 05:11:38.598502 1658854 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473
	I1209 05:11:38.633140 1658854 host.go:66] Checking if "ha-634473" exists ...
	I1209 05:11:38.633454 1658854 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:11:38.633502 1658854 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473
	I1209 05:11:38.652346 1658854 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473/id_rsa Username:docker}
	I1209 05:11:38.760524 1658854 ssh_runner.go:195] Run: systemctl --version
	I1209 05:11:38.769456 1658854 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:11:38.784845 1658854 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:11:38.864267 1658854 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:true NGoroutines:82 SystemTime:2025-12-09 05:11:38.85309579 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:11:38.864865 1658854 kubeconfig.go:125] found "ha-634473" server: "https://192.168.49.254:8443"
	I1209 05:11:38.864908 1658854 api_server.go:166] Checking apiserver status ...
	I1209 05:11:38.864961 1658854 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:11:38.877907 1658854 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1261/cgroup
	I1209 05:11:38.888122 1658854 api_server.go:182] apiserver freezer: "2:freezer:/docker/451a940c6775333987f96bda1a8dac55be755a72cdd93ec853e9dcbc59469bf4/crio/crio-f22a05924eab128b6621d22ab5e9561c5dc32a3192e4c7c7de9d896fd57d6ced"
	I1209 05:11:38.888210 1658854 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/451a940c6775333987f96bda1a8dac55be755a72cdd93ec853e9dcbc59469bf4/crio/crio-f22a05924eab128b6621d22ab5e9561c5dc32a3192e4c7c7de9d896fd57d6ced/freezer.state
	I1209 05:11:38.897467 1658854 api_server.go:204] freezer state: "THAWED"
	I1209 05:11:38.897502 1658854 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1209 05:11:38.906071 1658854 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1209 05:11:38.906106 1658854 status.go:463] ha-634473 apiserver status = Running (err=<nil>)
	I1209 05:11:38.906119 1658854 status.go:176] ha-634473 status: &{Name:ha-634473 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 05:11:38.906147 1658854 status.go:174] checking status of ha-634473-m02 ...
	I1209 05:11:38.906469 1658854 cli_runner.go:164] Run: docker container inspect ha-634473-m02 --format={{.State.Status}}
	I1209 05:11:38.924898 1658854 status.go:371] ha-634473-m02 host status = "Running" (err=<nil>)
	I1209 05:11:38.924924 1658854 host.go:66] Checking if "ha-634473-m02" exists ...
	I1209 05:11:38.925231 1658854 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473-m02
	I1209 05:11:38.944595 1658854 host.go:66] Checking if "ha-634473-m02" exists ...
	I1209 05:11:38.944956 1658854 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:11:38.945006 1658854 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m02
	I1209 05:11:38.963677 1658854 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34280 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m02/id_rsa Username:docker}
	I1209 05:11:39.072204 1658854 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:11:39.087821 1658854 kubeconfig.go:125] found "ha-634473" server: "https://192.168.49.254:8443"
	I1209 05:11:39.087857 1658854 api_server.go:166] Checking apiserver status ...
	I1209 05:11:39.087916 1658854 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1209 05:11:39.098254 1658854 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1209 05:11:39.098281 1658854 status.go:463] ha-634473-m02 apiserver status = Running (err=<nil>)
	I1209 05:11:39.098291 1658854 status.go:176] ha-634473-m02 status: &{Name:ha-634473-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 05:11:39.098307 1658854 status.go:174] checking status of ha-634473-m03 ...
	I1209 05:11:39.098656 1658854 cli_runner.go:164] Run: docker container inspect ha-634473-m03 --format={{.State.Status}}
	I1209 05:11:39.116047 1658854 status.go:371] ha-634473-m03 host status = "Running" (err=<nil>)
	I1209 05:11:39.116074 1658854 host.go:66] Checking if "ha-634473-m03" exists ...
	I1209 05:11:39.116380 1658854 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473-m03
	I1209 05:11:39.133587 1658854 host.go:66] Checking if "ha-634473-m03" exists ...
	I1209 05:11:39.134013 1658854 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:11:39.134065 1658854 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m03
	I1209 05:11:39.152085 1658854 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34270 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m03/id_rsa Username:docker}
	I1209 05:11:39.260433 1658854 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:11:39.274186 1658854 kubeconfig.go:125] found "ha-634473" server: "https://192.168.49.254:8443"
	I1209 05:11:39.274216 1658854 api_server.go:166] Checking apiserver status ...
	I1209 05:11:39.274266 1658854 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:11:39.286685 1658854 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1190/cgroup
	I1209 05:11:39.295858 1658854 api_server.go:182] apiserver freezer: "2:freezer:/docker/4096476f12329d36066415868bf1371a304c4e35cf5869220e753759e4326bd5/crio/crio-030ab8745dc3e732a1578e60ecfe89b581303f4356948b70e019e0b0f8293a4f"
	I1209 05:11:39.295925 1658854 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4096476f12329d36066415868bf1371a304c4e35cf5869220e753759e4326bd5/crio/crio-030ab8745dc3e732a1578e60ecfe89b581303f4356948b70e019e0b0f8293a4f/freezer.state
	I1209 05:11:39.304456 1658854 api_server.go:204] freezer state: "THAWED"
	I1209 05:11:39.304500 1658854 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1209 05:11:39.318951 1658854 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1209 05:11:39.319029 1658854 status.go:463] ha-634473-m03 apiserver status = Running (err=<nil>)
	I1209 05:11:39.319053 1658854 status.go:176] ha-634473-m03 status: &{Name:ha-634473-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 05:11:39.319097 1658854 status.go:174] checking status of ha-634473-m04 ...
	I1209 05:11:39.319482 1658854 cli_runner.go:164] Run: docker container inspect ha-634473-m04 --format={{.State.Status}}
	I1209 05:11:39.337464 1658854 status.go:371] ha-634473-m04 host status = "Running" (err=<nil>)
	I1209 05:11:39.337485 1658854 host.go:66] Checking if "ha-634473-m04" exists ...
	I1209 05:11:39.337893 1658854 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473-m04
	I1209 05:11:39.360280 1658854 host.go:66] Checking if "ha-634473-m04" exists ...
	I1209 05:11:39.360584 1658854 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:11:39.360620 1658854 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m04
	I1209 05:11:39.381518 1658854 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34275 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m04/id_rsa Username:docker}
	I1209 05:11:39.488055 1658854 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:11:39.502027 1658854 status.go:176] ha-634473-m04 status: &{Name:ha-634473-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1209 05:11:39.511524 1580521 retry.go:31] will retry after 1.241129374s: exit status 2
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-634473 status --alsologtostderr -v 5: exit status 2 (999.183078ms)

-- stdout --
	ha-634473
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-634473-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-634473-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-634473-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1209 05:11:40.819827 1659040 out.go:360] Setting OutFile to fd 1 ...
	I1209 05:11:40.820117 1659040 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:11:40.820166 1659040 out.go:374] Setting ErrFile to fd 2...
	I1209 05:11:40.820189 1659040 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:11:40.820529 1659040 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 05:11:40.820847 1659040 out.go:368] Setting JSON to false
	I1209 05:11:40.820924 1659040 mustload.go:66] Loading cluster: ha-634473
	I1209 05:11:40.821032 1659040 notify.go:221] Checking for updates...
	I1209 05:11:40.821498 1659040 config.go:182] Loaded profile config "ha-634473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 05:11:40.821541 1659040 status.go:174] checking status of ha-634473 ...
	I1209 05:11:40.822841 1659040 cli_runner.go:164] Run: docker container inspect ha-634473 --format={{.State.Status}}
	I1209 05:11:40.843874 1659040 status.go:371] ha-634473 host status = "Running" (err=<nil>)
	I1209 05:11:40.843916 1659040 host.go:66] Checking if "ha-634473" exists ...
	I1209 05:11:40.844271 1659040 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473
	I1209 05:11:40.869030 1659040 host.go:66] Checking if "ha-634473" exists ...
	I1209 05:11:40.869347 1659040 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:11:40.869410 1659040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473
	I1209 05:11:40.900130 1659040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473/id_rsa Username:docker}
	I1209 05:11:41.018817 1659040 ssh_runner.go:195] Run: systemctl --version
	I1209 05:11:41.028352 1659040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:11:41.045495 1659040 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:11:41.106829 1659040 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:true NGoroutines:82 SystemTime:2025-12-09 05:11:41.096110491 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:11:41.107399 1659040 kubeconfig.go:125] found "ha-634473" server: "https://192.168.49.254:8443"
	I1209 05:11:41.107436 1659040 api_server.go:166] Checking apiserver status ...
	I1209 05:11:41.107487 1659040 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:11:41.120433 1659040 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1261/cgroup
	I1209 05:11:41.129833 1659040 api_server.go:182] apiserver freezer: "2:freezer:/docker/451a940c6775333987f96bda1a8dac55be755a72cdd93ec853e9dcbc59469bf4/crio/crio-f22a05924eab128b6621d22ab5e9561c5dc32a3192e4c7c7de9d896fd57d6ced"
	I1209 05:11:41.129918 1659040 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/451a940c6775333987f96bda1a8dac55be755a72cdd93ec853e9dcbc59469bf4/crio/crio-f22a05924eab128b6621d22ab5e9561c5dc32a3192e4c7c7de9d896fd57d6ced/freezer.state
	I1209 05:11:41.138566 1659040 api_server.go:204] freezer state: "THAWED"
	I1209 05:11:41.138617 1659040 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1209 05:11:41.147777 1659040 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1209 05:11:41.147809 1659040 status.go:463] ha-634473 apiserver status = Running (err=<nil>)
	I1209 05:11:41.147820 1659040 status.go:176] ha-634473 status: &{Name:ha-634473 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 05:11:41.147881 1659040 status.go:174] checking status of ha-634473-m02 ...
	I1209 05:11:41.148301 1659040 cli_runner.go:164] Run: docker container inspect ha-634473-m02 --format={{.State.Status}}
	I1209 05:11:41.165852 1659040 status.go:371] ha-634473-m02 host status = "Running" (err=<nil>)
	I1209 05:11:41.165879 1659040 host.go:66] Checking if "ha-634473-m02" exists ...
	I1209 05:11:41.166189 1659040 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473-m02
	I1209 05:11:41.183433 1659040 host.go:66] Checking if "ha-634473-m02" exists ...
	I1209 05:11:41.183748 1659040 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:11:41.183792 1659040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m02
	I1209 05:11:41.202873 1659040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34280 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m02/id_rsa Username:docker}
	I1209 05:11:41.314370 1659040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:11:41.328047 1659040 kubeconfig.go:125] found "ha-634473" server: "https://192.168.49.254:8443"
	I1209 05:11:41.328081 1659040 api_server.go:166] Checking apiserver status ...
	I1209 05:11:41.328138 1659040 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1209 05:11:41.338800 1659040 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1209 05:11:41.338827 1659040 status.go:463] ha-634473-m02 apiserver status = Running (err=<nil>)
	I1209 05:11:41.338837 1659040 status.go:176] ha-634473-m02 status: &{Name:ha-634473-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 05:11:41.338864 1659040 status.go:174] checking status of ha-634473-m03 ...
	I1209 05:11:41.339179 1659040 cli_runner.go:164] Run: docker container inspect ha-634473-m03 --format={{.State.Status}}
	I1209 05:11:41.362113 1659040 status.go:371] ha-634473-m03 host status = "Running" (err=<nil>)
	I1209 05:11:41.362149 1659040 host.go:66] Checking if "ha-634473-m03" exists ...
	I1209 05:11:41.362459 1659040 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473-m03
	I1209 05:11:41.380862 1659040 host.go:66] Checking if "ha-634473-m03" exists ...
	I1209 05:11:41.381193 1659040 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:11:41.381239 1659040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m03
	I1209 05:11:41.402855 1659040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34270 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m03/id_rsa Username:docker}
	I1209 05:11:41.508497 1659040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:11:41.523473 1659040 kubeconfig.go:125] found "ha-634473" server: "https://192.168.49.254:8443"
	I1209 05:11:41.523503 1659040 api_server.go:166] Checking apiserver status ...
	I1209 05:11:41.523548 1659040 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:11:41.535823 1659040 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1190/cgroup
	I1209 05:11:41.544418 1659040 api_server.go:182] apiserver freezer: "2:freezer:/docker/4096476f12329d36066415868bf1371a304c4e35cf5869220e753759e4326bd5/crio/crio-030ab8745dc3e732a1578e60ecfe89b581303f4356948b70e019e0b0f8293a4f"
	I1209 05:11:41.544540 1659040 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4096476f12329d36066415868bf1371a304c4e35cf5869220e753759e4326bd5/crio/crio-030ab8745dc3e732a1578e60ecfe89b581303f4356948b70e019e0b0f8293a4f/freezer.state
	I1209 05:11:41.553420 1659040 api_server.go:204] freezer state: "THAWED"
	I1209 05:11:41.553448 1659040 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1209 05:11:41.563504 1659040 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1209 05:11:41.563535 1659040 status.go:463] ha-634473-m03 apiserver status = Running (err=<nil>)
	I1209 05:11:41.563544 1659040 status.go:176] ha-634473-m03 status: &{Name:ha-634473-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 05:11:41.563561 1659040 status.go:174] checking status of ha-634473-m04 ...
	I1209 05:11:41.563905 1659040 cli_runner.go:164] Run: docker container inspect ha-634473-m04 --format={{.State.Status}}
	I1209 05:11:41.581115 1659040 status.go:371] ha-634473-m04 host status = "Running" (err=<nil>)
	I1209 05:11:41.581138 1659040 host.go:66] Checking if "ha-634473-m04" exists ...
	I1209 05:11:41.581480 1659040 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473-m04
	I1209 05:11:41.602762 1659040 host.go:66] Checking if "ha-634473-m04" exists ...
	I1209 05:11:41.603431 1659040 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:11:41.603495 1659040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m04
	I1209 05:11:41.623650 1659040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34275 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m04/id_rsa Username:docker}
	I1209 05:11:41.732424 1659040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:11:41.746316 1659040 status.go:176] ha-634473-m04 status: &{Name:ha-634473-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1209 05:11:41.753456 1580521 retry.go:31] will retry after 4.756523021s: exit status 2
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-634473 status --alsologtostderr -v 5: exit status 2 (1.011884774s)

-- stdout --
	ha-634473
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-634473-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-634473-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-634473-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1209 05:11:46.563076 1659223 out.go:360] Setting OutFile to fd 1 ...
	I1209 05:11:46.563194 1659223 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:11:46.563204 1659223 out.go:374] Setting ErrFile to fd 2...
	I1209 05:11:46.563211 1659223 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:11:46.563513 1659223 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 05:11:46.563702 1659223 out.go:368] Setting JSON to false
	I1209 05:11:46.563737 1659223 mustload.go:66] Loading cluster: ha-634473
	I1209 05:11:46.564144 1659223 config.go:182] Loaded profile config "ha-634473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 05:11:46.564162 1659223 status.go:174] checking status of ha-634473 ...
	I1209 05:11:46.564675 1659223 cli_runner.go:164] Run: docker container inspect ha-634473 --format={{.State.Status}}
	I1209 05:11:46.564940 1659223 notify.go:221] Checking for updates...
	I1209 05:11:46.585518 1659223 status.go:371] ha-634473 host status = "Running" (err=<nil>)
	I1209 05:11:46.585545 1659223 host.go:66] Checking if "ha-634473" exists ...
	I1209 05:11:46.585859 1659223 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473
	I1209 05:11:46.611269 1659223 host.go:66] Checking if "ha-634473" exists ...
	I1209 05:11:46.611583 1659223 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:11:46.611645 1659223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473
	I1209 05:11:46.637730 1659223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473/id_rsa Username:docker}
	I1209 05:11:46.756588 1659223 ssh_runner.go:195] Run: systemctl --version
	I1209 05:11:46.764064 1659223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:11:46.781965 1659223 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:11:46.850813 1659223 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:true NGoroutines:82 SystemTime:2025-12-09 05:11:46.840225224 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:11:46.851403 1659223 kubeconfig.go:125] found "ha-634473" server: "https://192.168.49.254:8443"
	I1209 05:11:46.851441 1659223 api_server.go:166] Checking apiserver status ...
	I1209 05:11:46.851489 1659223 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:11:46.864837 1659223 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1261/cgroup
	I1209 05:11:46.873693 1659223 api_server.go:182] apiserver freezer: "2:freezer:/docker/451a940c6775333987f96bda1a8dac55be755a72cdd93ec853e9dcbc59469bf4/crio/crio-f22a05924eab128b6621d22ab5e9561c5dc32a3192e4c7c7de9d896fd57d6ced"
	I1209 05:11:46.873775 1659223 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/451a940c6775333987f96bda1a8dac55be755a72cdd93ec853e9dcbc59469bf4/crio/crio-f22a05924eab128b6621d22ab5e9561c5dc32a3192e4c7c7de9d896fd57d6ced/freezer.state
	I1209 05:11:46.881852 1659223 api_server.go:204] freezer state: "THAWED"
	I1209 05:11:46.881885 1659223 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1209 05:11:46.897425 1659223 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1209 05:11:46.897499 1659223 status.go:463] ha-634473 apiserver status = Running (err=<nil>)
	I1209 05:11:46.897526 1659223 status.go:176] ha-634473 status: &{Name:ha-634473 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 05:11:46.897565 1659223 status.go:174] checking status of ha-634473-m02 ...
	I1209 05:11:46.897943 1659223 cli_runner.go:164] Run: docker container inspect ha-634473-m02 --format={{.State.Status}}
	I1209 05:11:46.923461 1659223 status.go:371] ha-634473-m02 host status = "Running" (err=<nil>)
	I1209 05:11:46.923486 1659223 host.go:66] Checking if "ha-634473-m02" exists ...
	I1209 05:11:46.923787 1659223 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473-m02
	I1209 05:11:46.942689 1659223 host.go:66] Checking if "ha-634473-m02" exists ...
	I1209 05:11:46.942998 1659223 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:11:46.943064 1659223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m02
	I1209 05:11:46.965328 1659223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34280 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m02/id_rsa Username:docker}
	I1209 05:11:47.072223 1659223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:11:47.086326 1659223 kubeconfig.go:125] found "ha-634473" server: "https://192.168.49.254:8443"
	I1209 05:11:47.086376 1659223 api_server.go:166] Checking apiserver status ...
	I1209 05:11:47.086431 1659223 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1209 05:11:47.100152 1659223 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1209 05:11:47.100175 1659223 status.go:463] ha-634473-m02 apiserver status = Running (err=<nil>)
	I1209 05:11:47.100184 1659223 status.go:176] ha-634473-m02 status: &{Name:ha-634473-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 05:11:47.100202 1659223 status.go:174] checking status of ha-634473-m03 ...
	I1209 05:11:47.100524 1659223 cli_runner.go:164] Run: docker container inspect ha-634473-m03 --format={{.State.Status}}
	I1209 05:11:47.118900 1659223 status.go:371] ha-634473-m03 host status = "Running" (err=<nil>)
	I1209 05:11:47.118935 1659223 host.go:66] Checking if "ha-634473-m03" exists ...
	I1209 05:11:47.119248 1659223 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473-m03
	I1209 05:11:47.137501 1659223 host.go:66] Checking if "ha-634473-m03" exists ...
	I1209 05:11:47.137836 1659223 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:11:47.137887 1659223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m03
	I1209 05:11:47.156057 1659223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34270 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m03/id_rsa Username:docker}
	I1209 05:11:47.268785 1659223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:11:47.282644 1659223 kubeconfig.go:125] found "ha-634473" server: "https://192.168.49.254:8443"
	I1209 05:11:47.282678 1659223 api_server.go:166] Checking apiserver status ...
	I1209 05:11:47.282732 1659223 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:11:47.295708 1659223 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1190/cgroup
	I1209 05:11:47.305763 1659223 api_server.go:182] apiserver freezer: "2:freezer:/docker/4096476f12329d36066415868bf1371a304c4e35cf5869220e753759e4326bd5/crio/crio-030ab8745dc3e732a1578e60ecfe89b581303f4356948b70e019e0b0f8293a4f"
	I1209 05:11:47.305830 1659223 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4096476f12329d36066415868bf1371a304c4e35cf5869220e753759e4326bd5/crio/crio-030ab8745dc3e732a1578e60ecfe89b581303f4356948b70e019e0b0f8293a4f/freezer.state
	I1209 05:11:47.314190 1659223 api_server.go:204] freezer state: "THAWED"
	I1209 05:11:47.314219 1659223 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1209 05:11:47.324451 1659223 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1209 05:11:47.324531 1659223 status.go:463] ha-634473-m03 apiserver status = Running (err=<nil>)
	I1209 05:11:47.324554 1659223 status.go:176] ha-634473-m03 status: &{Name:ha-634473-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 05:11:47.324573 1659223 status.go:174] checking status of ha-634473-m04 ...
	I1209 05:11:47.324908 1659223 cli_runner.go:164] Run: docker container inspect ha-634473-m04 --format={{.State.Status}}
	I1209 05:11:47.346516 1659223 status.go:371] ha-634473-m04 host status = "Running" (err=<nil>)
	I1209 05:11:47.346553 1659223 host.go:66] Checking if "ha-634473-m04" exists ...
	I1209 05:11:47.346914 1659223 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473-m04
	I1209 05:11:47.367019 1659223 host.go:66] Checking if "ha-634473-m04" exists ...
	I1209 05:11:47.367327 1659223 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:11:47.367372 1659223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m04
	I1209 05:11:47.385122 1659223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34275 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m04/id_rsa Username:docker}
	I1209 05:11:47.494104 1659223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:11:47.512919 1659223 status.go:176] ha-634473-m04 status: &{Name:ha-634473-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1209 05:11:47.523228 1580521 retry.go:31] will retry after 5.963652289s: exit status 2
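The stderr trace above shows the per-node apiserver probe that `minikube status` performs: find the kube-apiserver PID with pgrep, read that PID's freezer cgroup to confirm the container is THAWED rather than paused, then GET /healthz on the load-balancer endpoint. A minimal local sketch of the same three steps in Go (the endpoint URL is the one from this run; the insecure TLS client and local exec stand in for minikube's real CA handling and ssh_runner):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Step 1: locate the apiserver, as in `sudo pgrep -xnf kube-apiserver.*minikube.*`.
		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			fmt.Println("apiserver status = Stopped (no pid)")
			os.Exit(1)
		}
		pid := strings.TrimSpace(string(out))

		// Step 2: find the freezer cgroup for that pid (`egrep ^[0-9]+:freezer: /proc/<pid>/cgroup`);
		// minikube then reads <cgroup>/freezer.state and expects "THAWED" for an unpaused node.
		cg, _ := os.ReadFile("/proc/" + pid + "/cgroup")
		for _, line := range strings.Split(string(cg), "\n") {
			if strings.Contains(line, ":freezer:") {
				fmt.Println("freezer cgroup:", line)
			}
		}

		// Step 3: probe the HA load-balancer endpoint, as in the healthz check above.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.49.254:8443/healthz")
		if err != nil {
			fmt.Println("healthz unreachable:", err)
			os.Exit(1)
		}
		defer resp.Body.Close()
		fmt.Println("healthz returned", resp.StatusCode) // 200 -> apiserver Running
	}

Note how this explains the m02 result above: pgrep exits 1 (no apiserver process yet), so the probe never reaches the healthz step and the node is reported with apiserver: Stopped.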
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-634473 status --alsologtostderr -v 5: exit status 2 (1.064952479s)

                                                
                                                
-- stdout --
	ha-634473
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-634473-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-634473-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-634473-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 05:11:53.540792 1659414 out.go:360] Setting OutFile to fd 1 ...
	I1209 05:11:53.540958 1659414 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:11:53.540964 1659414 out.go:374] Setting ErrFile to fd 2...
	I1209 05:11:53.540970 1659414 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:11:53.541233 1659414 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 05:11:53.541421 1659414 out.go:368] Setting JSON to false
	I1209 05:11:53.541463 1659414 mustload.go:66] Loading cluster: ha-634473
	I1209 05:11:53.541536 1659414 notify.go:221] Checking for updates...
	I1209 05:11:53.542845 1659414 config.go:182] Loaded profile config "ha-634473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 05:11:53.542869 1659414 status.go:174] checking status of ha-634473 ...
	I1209 05:11:53.543452 1659414 cli_runner.go:164] Run: docker container inspect ha-634473 --format={{.State.Status}}
	I1209 05:11:53.566661 1659414 status.go:371] ha-634473 host status = "Running" (err=<nil>)
	I1209 05:11:53.566687 1659414 host.go:66] Checking if "ha-634473" exists ...
	I1209 05:11:53.566987 1659414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473
	I1209 05:11:53.586948 1659414 host.go:66] Checking if "ha-634473" exists ...
	I1209 05:11:53.587312 1659414 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:11:53.587366 1659414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473
	I1209 05:11:53.624051 1659414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473/id_rsa Username:docker}
	I1209 05:11:53.732839 1659414 ssh_runner.go:195] Run: systemctl --version
	I1209 05:11:53.740131 1659414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:11:53.756972 1659414 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:11:53.840740 1659414 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:true NGoroutines:82 SystemTime:2025-12-09 05:11:53.830351239 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:11:53.841301 1659414 kubeconfig.go:125] found "ha-634473" server: "https://192.168.49.254:8443"
	I1209 05:11:53.841345 1659414 api_server.go:166] Checking apiserver status ...
	I1209 05:11:53.841394 1659414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:11:53.867124 1659414 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1261/cgroup
	I1209 05:11:53.882056 1659414 api_server.go:182] apiserver freezer: "2:freezer:/docker/451a940c6775333987f96bda1a8dac55be755a72cdd93ec853e9dcbc59469bf4/crio/crio-f22a05924eab128b6621d22ab5e9561c5dc32a3192e4c7c7de9d896fd57d6ced"
	I1209 05:11:53.882125 1659414 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/451a940c6775333987f96bda1a8dac55be755a72cdd93ec853e9dcbc59469bf4/crio/crio-f22a05924eab128b6621d22ab5e9561c5dc32a3192e4c7c7de9d896fd57d6ced/freezer.state
	I1209 05:11:53.901899 1659414 api_server.go:204] freezer state: "THAWED"
	I1209 05:11:53.901979 1659414 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1209 05:11:53.911848 1659414 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1209 05:11:53.911884 1659414 status.go:463] ha-634473 apiserver status = Running (err=<nil>)
	I1209 05:11:53.911909 1659414 status.go:176] ha-634473 status: &{Name:ha-634473 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 05:11:53.911932 1659414 status.go:174] checking status of ha-634473-m02 ...
	I1209 05:11:53.912335 1659414 cli_runner.go:164] Run: docker container inspect ha-634473-m02 --format={{.State.Status}}
	I1209 05:11:53.942016 1659414 status.go:371] ha-634473-m02 host status = "Running" (err=<nil>)
	I1209 05:11:53.942043 1659414 host.go:66] Checking if "ha-634473-m02" exists ...
	I1209 05:11:53.942373 1659414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473-m02
	I1209 05:11:53.965143 1659414 host.go:66] Checking if "ha-634473-m02" exists ...
	I1209 05:11:53.965479 1659414 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:11:53.965532 1659414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m02
	I1209 05:11:53.986735 1659414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34280 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m02/id_rsa Username:docker}
	I1209 05:11:54.104707 1659414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:11:54.120041 1659414 kubeconfig.go:125] found "ha-634473" server: "https://192.168.49.254:8443"
	I1209 05:11:54.120078 1659414 api_server.go:166] Checking apiserver status ...
	I1209 05:11:54.120126 1659414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1209 05:11:54.131295 1659414 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1209 05:11:54.131319 1659414 status.go:463] ha-634473-m02 apiserver status = Running (err=<nil>)
	I1209 05:11:54.131330 1659414 status.go:176] ha-634473-m02 status: &{Name:ha-634473-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 05:11:54.131348 1659414 status.go:174] checking status of ha-634473-m03 ...
	I1209 05:11:54.131669 1659414 cli_runner.go:164] Run: docker container inspect ha-634473-m03 --format={{.State.Status}}
	I1209 05:11:54.153136 1659414 status.go:371] ha-634473-m03 host status = "Running" (err=<nil>)
	I1209 05:11:54.153174 1659414 host.go:66] Checking if "ha-634473-m03" exists ...
	I1209 05:11:54.153491 1659414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473-m03
	I1209 05:11:54.170749 1659414 host.go:66] Checking if "ha-634473-m03" exists ...
	I1209 05:11:54.171095 1659414 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:11:54.171148 1659414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m03
	I1209 05:11:54.189961 1659414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34270 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m03/id_rsa Username:docker}
	I1209 05:11:54.296379 1659414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:11:54.314934 1659414 kubeconfig.go:125] found "ha-634473" server: "https://192.168.49.254:8443"
	I1209 05:11:54.314967 1659414 api_server.go:166] Checking apiserver status ...
	I1209 05:11:54.315012 1659414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:11:54.330493 1659414 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1190/cgroup
	I1209 05:11:54.341380 1659414 api_server.go:182] apiserver freezer: "2:freezer:/docker/4096476f12329d36066415868bf1371a304c4e35cf5869220e753759e4326bd5/crio/crio-030ab8745dc3e732a1578e60ecfe89b581303f4356948b70e019e0b0f8293a4f"
	I1209 05:11:54.341465 1659414 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4096476f12329d36066415868bf1371a304c4e35cf5869220e753759e4326bd5/crio/crio-030ab8745dc3e732a1578e60ecfe89b581303f4356948b70e019e0b0f8293a4f/freezer.state
	I1209 05:11:54.352296 1659414 api_server.go:204] freezer state: "THAWED"
	I1209 05:11:54.352329 1659414 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1209 05:11:54.363554 1659414 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1209 05:11:54.363589 1659414 status.go:463] ha-634473-m03 apiserver status = Running (err=<nil>)
	I1209 05:11:54.363607 1659414 status.go:176] ha-634473-m03 status: &{Name:ha-634473-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 05:11:54.363634 1659414 status.go:174] checking status of ha-634473-m04 ...
	I1209 05:11:54.364201 1659414 cli_runner.go:164] Run: docker container inspect ha-634473-m04 --format={{.State.Status}}
	I1209 05:11:54.385731 1659414 status.go:371] ha-634473-m04 host status = "Running" (err=<nil>)
	I1209 05:11:54.385765 1659414 host.go:66] Checking if "ha-634473-m04" exists ...
	I1209 05:11:54.386075 1659414 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473-m04
	I1209 05:11:54.405426 1659414 host.go:66] Checking if "ha-634473-m04" exists ...
	I1209 05:11:54.405741 1659414 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:11:54.405792 1659414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m04
	I1209 05:11:54.424763 1659414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34275 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m04/id_rsa Username:docker}
	I1209 05:11:54.532377 1659414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:11:54.545980 1659414 status.go:176] ha-634473-m04 status: &{Name:ha-634473-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1209 05:11:54.552483 1580521 retry.go:31] will retry after 10.938813874s: exit status 2
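Each failed status run above is rescheduled by retry.go with a randomized, growing wait (≈6s, then ≈11s twice) until the apiserver on m02 comes back. A sketch of that retry-until-deadline shape, assuming jittered backoff; the exact policy in minikube's retry package may differ:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retry runs check until it succeeds or the deadline passes,
	// sleeping a jittered, growing interval between attempts.
	func retry(check func() error, deadline time.Duration) error {
		start := time.Now()
		wait := 5 * time.Second
		for {
			err := check()
			if err == nil {
				return nil
			}
			if time.Since(start) > deadline {
				return fmt.Errorf("timed out: %w", err)
			}
			sleep := wait + time.Duration(rand.Int63n(int64(wait))) // add jitter
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			wait = wait * 3 / 2 // grow the base interval
		}
	}

	func main() {
		attempts := 0
		_ = retry(func() error {
			attempts++
			if attempts < 3 {
				return errors.New("exit status 2") // mimic the failing status runs above
			}
			return nil
		}, time.Minute)
		fmt.Println("succeeded after", attempts, "attempts")
	}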
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-634473 status --alsologtostderr -v 5: exit status 2 (1.021163061s)

                                                
                                                
-- stdout --
	ha-634473
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-634473-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-634473-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-634473-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 05:12:05.543435 1659611 out.go:360] Setting OutFile to fd 1 ...
	I1209 05:12:05.543574 1659611 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:12:05.543586 1659611 out.go:374] Setting ErrFile to fd 2...
	I1209 05:12:05.543591 1659611 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:12:05.543833 1659611 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 05:12:05.544067 1659611 out.go:368] Setting JSON to false
	I1209 05:12:05.544115 1659611 mustload.go:66] Loading cluster: ha-634473
	I1209 05:12:05.544187 1659611 notify.go:221] Checking for updates...
	I1209 05:12:05.545360 1659611 config.go:182] Loaded profile config "ha-634473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 05:12:05.545389 1659611 status.go:174] checking status of ha-634473 ...
	I1209 05:12:05.546056 1659611 cli_runner.go:164] Run: docker container inspect ha-634473 --format={{.State.Status}}
	I1209 05:12:05.563632 1659611 status.go:371] ha-634473 host status = "Running" (err=<nil>)
	I1209 05:12:05.563658 1659611 host.go:66] Checking if "ha-634473" exists ...
	I1209 05:12:05.564009 1659611 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473
	I1209 05:12:05.598889 1659611 host.go:66] Checking if "ha-634473" exists ...
	I1209 05:12:05.599260 1659611 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:12:05.599319 1659611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473
	I1209 05:12:05.622909 1659611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473/id_rsa Username:docker}
	I1209 05:12:05.737803 1659611 ssh_runner.go:195] Run: systemctl --version
	I1209 05:12:05.745186 1659611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:12:05.759312 1659611 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:12:05.833230 1659611 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:true NGoroutines:82 SystemTime:2025-12-09 05:12:05.822236887 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:12:05.833832 1659611 kubeconfig.go:125] found "ha-634473" server: "https://192.168.49.254:8443"
	I1209 05:12:05.833869 1659611 api_server.go:166] Checking apiserver status ...
	I1209 05:12:05.833923 1659611 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:12:05.852766 1659611 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1261/cgroup
	I1209 05:12:05.864351 1659611 api_server.go:182] apiserver freezer: "2:freezer:/docker/451a940c6775333987f96bda1a8dac55be755a72cdd93ec853e9dcbc59469bf4/crio/crio-f22a05924eab128b6621d22ab5e9561c5dc32a3192e4c7c7de9d896fd57d6ced"
	I1209 05:12:05.864426 1659611 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/451a940c6775333987f96bda1a8dac55be755a72cdd93ec853e9dcbc59469bf4/crio/crio-f22a05924eab128b6621d22ab5e9561c5dc32a3192e4c7c7de9d896fd57d6ced/freezer.state
	I1209 05:12:05.873728 1659611 api_server.go:204] freezer state: "THAWED"
	I1209 05:12:05.873757 1659611 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1209 05:12:05.882458 1659611 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1209 05:12:05.882496 1659611 status.go:463] ha-634473 apiserver status = Running (err=<nil>)
	I1209 05:12:05.882507 1659611 status.go:176] ha-634473 status: &{Name:ha-634473 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 05:12:05.882525 1659611 status.go:174] checking status of ha-634473-m02 ...
	I1209 05:12:05.882901 1659611 cli_runner.go:164] Run: docker container inspect ha-634473-m02 --format={{.State.Status}}
	I1209 05:12:05.907345 1659611 status.go:371] ha-634473-m02 host status = "Running" (err=<nil>)
	I1209 05:12:05.907369 1659611 host.go:66] Checking if "ha-634473-m02" exists ...
	I1209 05:12:05.907687 1659611 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473-m02
	I1209 05:12:05.930255 1659611 host.go:66] Checking if "ha-634473-m02" exists ...
	I1209 05:12:05.930671 1659611 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:12:05.930724 1659611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m02
	I1209 05:12:05.948474 1659611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34280 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m02/id_rsa Username:docker}
	I1209 05:12:06.065337 1659611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:12:06.082021 1659611 kubeconfig.go:125] found "ha-634473" server: "https://192.168.49.254:8443"
	I1209 05:12:06.082056 1659611 api_server.go:166] Checking apiserver status ...
	I1209 05:12:06.082113 1659611 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1209 05:12:06.096164 1659611 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1209 05:12:06.096190 1659611 status.go:463] ha-634473-m02 apiserver status = Running (err=<nil>)
	I1209 05:12:06.096201 1659611 status.go:176] ha-634473-m02 status: &{Name:ha-634473-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 05:12:06.096219 1659611 status.go:174] checking status of ha-634473-m03 ...
	I1209 05:12:06.096570 1659611 cli_runner.go:164] Run: docker container inspect ha-634473-m03 --format={{.State.Status}}
	I1209 05:12:06.116929 1659611 status.go:371] ha-634473-m03 host status = "Running" (err=<nil>)
	I1209 05:12:06.116963 1659611 host.go:66] Checking if "ha-634473-m03" exists ...
	I1209 05:12:06.117268 1659611 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473-m03
	I1209 05:12:06.136298 1659611 host.go:66] Checking if "ha-634473-m03" exists ...
	I1209 05:12:06.136678 1659611 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:12:06.136736 1659611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m03
	I1209 05:12:06.155317 1659611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34270 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m03/id_rsa Username:docker}
	I1209 05:12:06.260573 1659611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:12:06.279080 1659611 kubeconfig.go:125] found "ha-634473" server: "https://192.168.49.254:8443"
	I1209 05:12:06.279117 1659611 api_server.go:166] Checking apiserver status ...
	I1209 05:12:06.279173 1659611 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:12:06.295726 1659611 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1190/cgroup
	I1209 05:12:06.305759 1659611 api_server.go:182] apiserver freezer: "2:freezer:/docker/4096476f12329d36066415868bf1371a304c4e35cf5869220e753759e4326bd5/crio/crio-030ab8745dc3e732a1578e60ecfe89b581303f4356948b70e019e0b0f8293a4f"
	I1209 05:12:06.305876 1659611 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4096476f12329d36066415868bf1371a304c4e35cf5869220e753759e4326bd5/crio/crio-030ab8745dc3e732a1578e60ecfe89b581303f4356948b70e019e0b0f8293a4f/freezer.state
	I1209 05:12:06.316711 1659611 api_server.go:204] freezer state: "THAWED"
	I1209 05:12:06.316789 1659611 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1209 05:12:06.327434 1659611 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1209 05:12:06.327465 1659611 status.go:463] ha-634473-m03 apiserver status = Running (err=<nil>)
	I1209 05:12:06.327475 1659611 status.go:176] ha-634473-m03 status: &{Name:ha-634473-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 05:12:06.327493 1659611 status.go:174] checking status of ha-634473-m04 ...
	I1209 05:12:06.327809 1659611 cli_runner.go:164] Run: docker container inspect ha-634473-m04 --format={{.State.Status}}
	I1209 05:12:06.345375 1659611 status.go:371] ha-634473-m04 host status = "Running" (err=<nil>)
	I1209 05:12:06.345397 1659611 host.go:66] Checking if "ha-634473-m04" exists ...
	I1209 05:12:06.345706 1659611 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473-m04
	I1209 05:12:06.371331 1659611 host.go:66] Checking if "ha-634473-m04" exists ...
	I1209 05:12:06.371651 1659611 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:12:06.371701 1659611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m04
	I1209 05:12:06.388820 1659611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34275 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m04/id_rsa Username:docker}
	I1209 05:12:06.492391 1659611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:12:06.507306 1659611 status.go:176] ha-634473-m04 status: &{Name:ha-634473-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1209 05:12:06.513861 1580521 retry.go:31] will retry after 10.634521294s: exit status 2
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-634473 status --alsologtostderr -v 5: (1.004483052s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
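Every `(dbg) Run` / `Non-zero exit` / `Done` triple in this report comes from the harness wrapping a minikube invocation and recording its exit status and wall time. A stripped-down sketch of such a wrapper over os/exec (the real helpers in helpers_test.go attach test context and log captured stdout/stderr separately):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"time"
	)

	// run executes a command and reports its duration and exit status
	// in the same spirit as the harness's (dbg) lines above.
	func run(name string, args ...string) {
		start := time.Now()
		out, err := exec.Command(name, args...).CombinedOutput()
		elapsed := time.Since(start).Round(time.Millisecond)

		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Printf("(dbg) Done: %s %v: (%v)\n", name, args, elapsed)
		case errors.As(err, &exitErr):
			fmt.Printf("(dbg) Non-zero exit: %s %v: exit status %d (%v)\n",
				name, args, exitErr.ExitCode(), elapsed)
		default:
			fmt.Printf("(dbg) failed to start: %v\n", err)
		}
		_ = out // the harness dumps this between -- stdout -- / ** stderr ** markers
	}

	func main() {
		run("true")
		run("false")
	}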
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-634473
helpers_test.go:243: (dbg) docker inspect ha-634473:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "451a940c6775333987f96bda1a8dac55be755a72cdd93ec853e9dcbc59469bf4",
	        "Created": "2025-12-09T04:58:31.573373003Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1642404,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T04:58:31.644856964Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/451a940c6775333987f96bda1a8dac55be755a72cdd93ec853e9dcbc59469bf4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/451a940c6775333987f96bda1a8dac55be755a72cdd93ec853e9dcbc59469bf4/hostname",
	        "HostsPath": "/var/lib/docker/containers/451a940c6775333987f96bda1a8dac55be755a72cdd93ec853e9dcbc59469bf4/hosts",
	        "LogPath": "/var/lib/docker/containers/451a940c6775333987f96bda1a8dac55be755a72cdd93ec853e9dcbc59469bf4/451a940c6775333987f96bda1a8dac55be755a72cdd93ec853e9dcbc59469bf4-json.log",
	        "Name": "/ha-634473",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-634473:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-634473",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "451a940c6775333987f96bda1a8dac55be755a72cdd93ec853e9dcbc59469bf4",
	                "LowerDir": "/var/lib/docker/overlay2/ac036498f19029335ee3c227fe98c5bdf7685528639e3bd9ef175cc4002b5aac-init/diff:/var/lib/docker/overlay2/cb3f2b8eaaa8875b2899fccd39c4eec1759909855a0b804bc10246bdeabb16ed/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ac036498f19029335ee3c227fe98c5bdf7685528639e3bd9ef175cc4002b5aac/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ac036498f19029335ee3c227fe98c5bdf7685528639e3bd9ef175cc4002b5aac/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ac036498f19029335ee3c227fe98c5bdf7685528639e3bd9ef175cc4002b5aac/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-634473",
	                "Source": "/var/lib/docker/volumes/ha-634473/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-634473",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-634473",
	                "name.minikube.sigs.k8s.io": "ha-634473",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bea4d2655be18eda8365b7b6a3cecd1dcd33f54a9d306a410606ad9c18294725",
	            "SandboxKey": "/var/run/docker/netns/bea4d2655be1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34260"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34261"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34264"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34262"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34263"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-634473": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0a:6f:04:f8:61:4c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7a642f98ac35748f2ae973c467fa98d08d37a805360f25f118b2f937619937ae",
	                    "EndpointID": "f26672f4177e91ed74c32541c4b35c2d2fce9ca7a873c40b8f4eedc202d7b71b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-634473",
	                        "451a940c6775"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
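This inspect dump is the data behind the port lookups earlier in the trace: the template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}} pulls the host port docker bound to the container's SSH port (34260 for ha-634473), which then feeds sshutil's client. A small Go sketch of the same lookup via the docker CLI:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshPort returns the host port docker mapped to the container's 22/tcp,
	// using the same inspect template minikube's cli_runner runs above.
	func sshPort(container string) (string, error) {
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshPort("ha-634473") // container name taken from this report
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh reachable at 127.0.0.1:" + port) // 34260 in the run above
	}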
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-634473 -n ha-634473
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-634473 logs -n 25: (1.518056734s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-634473 ssh -n ha-634473-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-634473 │ jenkins │ v1.37.0 │ 09 Dec 25 05:03 UTC │ 09 Dec 25 05:03 UTC │
	│ cp      │ ha-634473 cp ha-634473-m03:/home/docker/cp-test.txt ha-634473:/home/docker/cp-test_ha-634473-m03_ha-634473.txt                       │ ha-634473 │ jenkins │ v1.37.0 │ 09 Dec 25 05:03 UTC │ 09 Dec 25 05:03 UTC │
	│ ssh     │ ha-634473 ssh -n ha-634473-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-634473 │ jenkins │ v1.37.0 │ 09 Dec 25 05:03 UTC │ 09 Dec 25 05:03 UTC │
	│ ssh     │ ha-634473 ssh -n ha-634473 sudo cat /home/docker/cp-test_ha-634473-m03_ha-634473.txt                                                 │ ha-634473 │ jenkins │ v1.37.0 │ 09 Dec 25 05:03 UTC │ 09 Dec 25 05:03 UTC │
	│ cp      │ ha-634473 cp ha-634473-m03:/home/docker/cp-test.txt ha-634473-m02:/home/docker/cp-test_ha-634473-m03_ha-634473-m02.txt               │ ha-634473 │ jenkins │ v1.37.0 │ 09 Dec 25 05:03 UTC │ 09 Dec 25 05:03 UTC │
	│ ssh     │ ha-634473 ssh -n ha-634473-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-634473 │ jenkins │ v1.37.0 │ 09 Dec 25 05:03 UTC │ 09 Dec 25 05:03 UTC │
	│ ssh     │ ha-634473 ssh -n ha-634473-m02 sudo cat /home/docker/cp-test_ha-634473-m03_ha-634473-m02.txt                                         │ ha-634473 │ jenkins │ v1.37.0 │ 09 Dec 25 05:03 UTC │ 09 Dec 25 05:03 UTC │
	│ cp      │ ha-634473 cp ha-634473-m03:/home/docker/cp-test.txt ha-634473-m04:/home/docker/cp-test_ha-634473-m03_ha-634473-m04.txt               │ ha-634473 │ jenkins │ v1.37.0 │ 09 Dec 25 05:03 UTC │ 09 Dec 25 05:03 UTC │
	│ ssh     │ ha-634473 ssh -n ha-634473-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-634473 │ jenkins │ v1.37.0 │ 09 Dec 25 05:03 UTC │ 09 Dec 25 05:03 UTC │
	│ ssh     │ ha-634473 ssh -n ha-634473-m04 sudo cat /home/docker/cp-test_ha-634473-m03_ha-634473-m04.txt                                         │ ha-634473 │ jenkins │ v1.37.0 │ 09 Dec 25 05:03 UTC │ 09 Dec 25 05:03 UTC │
	│ cp      │ ha-634473 cp testdata/cp-test.txt ha-634473-m04:/home/docker/cp-test.txt                                                             │ ha-634473 │ jenkins │ v1.37.0 │ 09 Dec 25 05:03 UTC │ 09 Dec 25 05:03 UTC │
	│ ssh     │ ha-634473 ssh -n ha-634473-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-634473 │ jenkins │ v1.37.0 │ 09 Dec 25 05:03 UTC │ 09 Dec 25 05:03 UTC │
	│ cp      │ ha-634473 cp ha-634473-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2832400682/001/cp-test_ha-634473-m04.txt │ ha-634473 │ jenkins │ v1.37.0 │ 09 Dec 25 05:03 UTC │ 09 Dec 25 05:03 UTC │
	│ ssh     │ ha-634473 ssh -n ha-634473-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-634473 │ jenkins │ v1.37.0 │ 09 Dec 25 05:03 UTC │ 09 Dec 25 05:03 UTC │
	│ cp      │ ha-634473 cp ha-634473-m04:/home/docker/cp-test.txt ha-634473:/home/docker/cp-test_ha-634473-m04_ha-634473.txt                       │ ha-634473 │ jenkins │ v1.37.0 │ 09 Dec 25 05:03 UTC │ 09 Dec 25 05:03 UTC │
	│ ssh     │ ha-634473 ssh -n ha-634473-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-634473 │ jenkins │ v1.37.0 │ 09 Dec 25 05:03 UTC │ 09 Dec 25 05:03 UTC │
	│ ssh     │ ha-634473 ssh -n ha-634473 sudo cat /home/docker/cp-test_ha-634473-m04_ha-634473.txt                                                 │ ha-634473 │ jenkins │ v1.37.0 │ 09 Dec 25 05:03 UTC │ 09 Dec 25 05:03 UTC │
	│ cp      │ ha-634473 cp ha-634473-m04:/home/docker/cp-test.txt ha-634473-m02:/home/docker/cp-test_ha-634473-m04_ha-634473-m02.txt               │ ha-634473 │ jenkins │ v1.37.0 │ 09 Dec 25 05:03 UTC │ 09 Dec 25 05:03 UTC │
	│ ssh     │ ha-634473 ssh -n ha-634473-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-634473 │ jenkins │ v1.37.0 │ 09 Dec 25 05:03 UTC │ 09 Dec 25 05:03 UTC │
	│ ssh     │ ha-634473 ssh -n ha-634473-m02 sudo cat /home/docker/cp-test_ha-634473-m04_ha-634473-m02.txt                                         │ ha-634473 │ jenkins │ v1.37.0 │ 09 Dec 25 05:03 UTC │ 09 Dec 25 05:03 UTC │
	│ cp      │ ha-634473 cp ha-634473-m04:/home/docker/cp-test.txt ha-634473-m03:/home/docker/cp-test_ha-634473-m04_ha-634473-m03.txt               │ ha-634473 │ jenkins │ v1.37.0 │ 09 Dec 25 05:03 UTC │ 09 Dec 25 05:03 UTC │
	│ ssh     │ ha-634473 ssh -n ha-634473-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-634473 │ jenkins │ v1.37.0 │ 09 Dec 25 05:03 UTC │ 09 Dec 25 05:03 UTC │
	│ ssh     │ ha-634473 ssh -n ha-634473-m03 sudo cat /home/docker/cp-test_ha-634473-m04_ha-634473-m03.txt                                         │ ha-634473 │ jenkins │ v1.37.0 │ 09 Dec 25 05:03 UTC │ 09 Dec 25 05:03 UTC │
	│ node    │ ha-634473 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-634473 │ jenkins │ v1.37.0 │ 09 Dec 25 05:03 UTC │ 09 Dec 25 05:03 UTC │
	│ node    │ ha-634473 node start m02 --alsologtostderr -v 5                                                                                      │ ha-634473 │ jenkins │ v1.37.0 │ 09 Dec 25 05:03 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 04:58:26
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 04:58:26.536004 1642009 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:58:26.536215 1642009 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:58:26.536246 1642009 out.go:374] Setting ErrFile to fd 2...
	I1209 04:58:26.536268 1642009 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:58:26.536542 1642009 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 04:58:26.536989 1642009 out.go:368] Setting JSON to false
	I1209 04:58:26.537846 1642009 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":34847,"bootTime":1765221460,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1209 04:58:26.537937 1642009 start.go:143] virtualization:  
	I1209 04:58:26.544268 1642009 out.go:179] * [ha-634473] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 04:58:26.547845 1642009 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 04:58:26.547942 1642009 notify.go:221] Checking for updates...
	I1209 04:58:26.554464 1642009 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:58:26.557681 1642009 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:58:26.560822 1642009 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1577059/.minikube
	I1209 04:58:26.563885 1642009 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 04:58:26.566858 1642009 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 04:58:26.570000 1642009 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:58:26.596964 1642009 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:58:26.597126 1642009 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:58:26.655461 1642009 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-09 04:58:26.645849658 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:58:26.655566 1642009 docker.go:319] overlay module found
	I1209 04:58:26.658817 1642009 out.go:179] * Using the docker driver based on user configuration
	I1209 04:58:26.661739 1642009 start.go:309] selected driver: docker
	I1209 04:58:26.661758 1642009 start.go:927] validating driver "docker" against <nil>
	I1209 04:58:26.661772 1642009 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 04:58:26.662528 1642009 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:58:26.717157 1642009 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-09 04:58:26.707679931 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
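The driver validation above amounts to parsing the output of "docker system info"; in particular the CgroupDriver reported here (cgroupfs) is the value the CRI-O cgroup manager is matched against further down. A minimal sketch for pulling the same fields by hand, assuming the host docker CLI:

	# fields minikube keys off of when validating the docker driver
	docker info --format '{{.CgroupDriver}} {{.Driver}} {{.NCPU}} {{.MemTotal}}'
	# on this runner: cgroupfs overlay2 2 8214835200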
	I1209 04:58:26.717315 1642009 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1209 04:58:26.717568 1642009 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 04:58:26.720670 1642009 out.go:179] * Using Docker driver with root privileges
	I1209 04:58:26.723619 1642009 cni.go:84] Creating CNI manager for ""
	I1209 04:58:26.723693 1642009 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1209 04:58:26.723713 1642009 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 04:58:26.723797 1642009 start.go:353] cluster config:
	{Name:ha-634473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-634473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:58:26.727085 1642009 out.go:179] * Starting "ha-634473" primary control-plane node in "ha-634473" cluster
	I1209 04:58:26.729947 1642009 cache.go:134] Beginning downloading kic base image for docker with crio
	I1209 04:58:26.732859 1642009 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 04:58:26.735674 1642009 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 04:58:26.735720 1642009 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1209 04:58:26.735733 1642009 cache.go:65] Caching tarball of preloaded images
	I1209 04:58:26.735752 1642009 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 04:58:26.735814 1642009 preload.go:238] Found /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1209 04:58:26.735833 1642009 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1209 04:58:26.736174 1642009 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/config.json ...
	I1209 04:58:26.736204 1642009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/config.json: {Name:mke3cdfc757e5a42a0ca18d2c5a597092a09cf8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:58:26.754658 1642009 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 04:58:26.754683 1642009 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 04:58:26.754699 1642009 cache.go:243] Successfully downloaded all kic artifacts
	I1209 04:58:26.754730 1642009 start.go:360] acquireMachinesLock for ha-634473: {Name:mk48f2d3177db8cc2658da6510147485a41df001 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 04:58:26.754835 1642009 start.go:364] duration metric: took 85.974µs to acquireMachinesLock for "ha-634473"
	I1209 04:58:26.754868 1642009 start.go:93] Provisioning new machine with config: &{Name:ha-634473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-634473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 04:58:26.754939 1642009 start.go:125] createHost starting for "" (driver="docker")
	I1209 04:58:26.758285 1642009 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1209 04:58:26.758511 1642009 start.go:159] libmachine.API.Create for "ha-634473" (driver="docker")
	I1209 04:58:26.758548 1642009 client.go:173] LocalClient.Create starting
	I1209 04:58:26.758671 1642009 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem
	I1209 04:58:26.758712 1642009 main.go:143] libmachine: Decoding PEM data...
	I1209 04:58:26.758732 1642009 main.go:143] libmachine: Parsing certificate...
	I1209 04:58:26.758807 1642009 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem
	I1209 04:58:26.758829 1642009 main.go:143] libmachine: Decoding PEM data...
	I1209 04:58:26.758845 1642009 main.go:143] libmachine: Parsing certificate...
	I1209 04:58:26.759230 1642009 cli_runner.go:164] Run: docker network inspect ha-634473 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1209 04:58:26.774531 1642009 cli_runner.go:211] docker network inspect ha-634473 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1209 04:58:26.774637 1642009 network_create.go:284] running [docker network inspect ha-634473] to gather additional debugging logs...
	I1209 04:58:26.774668 1642009 cli_runner.go:164] Run: docker network inspect ha-634473
	W1209 04:58:26.790731 1642009 cli_runner.go:211] docker network inspect ha-634473 returned with exit code 1
	I1209 04:58:26.790765 1642009 network_create.go:287] error running [docker network inspect ha-634473]: docker network inspect ha-634473: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-634473 not found
	I1209 04:58:26.790780 1642009 network_create.go:289] output of [docker network inspect ha-634473]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-634473 not found
	
	** /stderr **
	I1209 04:58:26.790874 1642009 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 04:58:26.806807 1642009 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001920b60}
	I1209 04:58:26.806855 1642009 network_create.go:124] attempt to create docker network ha-634473 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1209 04:58:26.806922 1642009 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-634473 ha-634473
	I1209 04:58:26.870870 1642009 network_create.go:108] docker network ha-634473 192.168.49.0/24 created
	I1209 04:58:26.870907 1642009 kic.go:121] calculated static IP "192.168.49.2" for the "ha-634473" container
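The network is created with an explicit subnet and gateway so that the node can be pinned to the deterministic address 192.168.49.2. A quick way to confirm what was actually allocated, assuming the host docker CLI:

	# subnet/gateway the static node IP was derived from
	docker network inspect ha-634473 --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	# expected here: 192.168.49.0/24 192.168.49.1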
	I1209 04:58:26.870998 1642009 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1209 04:58:26.884596 1642009 cli_runner.go:164] Run: docker volume create ha-634473 --label name.minikube.sigs.k8s.io=ha-634473 --label created_by.minikube.sigs.k8s.io=true
	I1209 04:58:26.908762 1642009 oci.go:103] Successfully created a docker volume ha-634473
	I1209 04:58:26.908849 1642009 cli_runner.go:164] Run: docker run --rm --name ha-634473-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-634473 --entrypoint /usr/bin/test -v ha-634473:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -d /var/lib
	I1209 04:58:27.450320 1642009 oci.go:107] Successfully prepared a docker volume ha-634473
	I1209 04:58:27.450393 1642009 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 04:58:27.450412 1642009 kic.go:194] Starting extracting preloaded images to volume ...
	I1209 04:58:27.450480 1642009 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ha-634473:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir
	I1209 04:58:31.505825 1642009 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ha-634473:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir: (4.055304489s)
	I1209 04:58:31.505861 1642009 kic.go:203] duration metric: took 4.055446184s to extract preloaded images to volume ...
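The preloaded image tarball is extracted straight into the ha-634473 volume, so the node container comes up with its image store already populated and no per-image pulls are needed. To peek at what landed in the volume, a throwaway container works (alpine here is an arbitrary choice, not something the test uses):

	docker run --rm -v ha-634473:/var alpine ls /var/lib
	# the CRI-O image store typically lives under /var/lib/containers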
	W1209 04:58:31.506000 1642009 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1209 04:58:31.506110 1642009 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1209 04:58:31.558811 1642009 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-634473 --name ha-634473 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-634473 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-634473 --network ha-634473 --ip 192.168.49.2 --volume ha-634473:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c
	I1209 04:58:31.869241 1642009 cli_runner.go:164] Run: docker container inspect ha-634473 --format={{.State.Running}}
	I1209 04:58:31.893531 1642009 cli_runner.go:164] Run: docker container inspect ha-634473 --format={{.State.Status}}
	I1209 04:58:31.918464 1642009 cli_runner.go:164] Run: docker exec ha-634473 stat /var/lib/dpkg/alternatives/iptables
	I1209 04:58:31.970205 1642009 oci.go:144] the created container "ha-634473" has a running status.
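The "docker run" above publishes ports 22, 2376, 5000, 8443 and 32443 to dynamically allocated host ports bound to 127.0.0.1; the SSH port that appears below (34260) can be resolved the same way, assuming the host docker CLI:

	docker port ha-634473 22
	# -> 127.0.0.1:34260 on this run (allocated dynamically, differs per run)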
	I1209 04:58:31.970242 1642009 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473/id_rsa...
	I1209 04:58:32.664950 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1209 04:58:32.665003 1642009 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1209 04:58:32.685928 1642009 cli_runner.go:164] Run: docker container inspect ha-634473 --format={{.State.Status}}
	I1209 04:58:32.704976 1642009 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1209 04:58:32.705003 1642009 kic_runner.go:114] Args: [docker exec --privileged ha-634473 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1209 04:58:32.746318 1642009 cli_runner.go:164] Run: docker container inspect ha-634473 --format={{.State.Status}}
	I1209 04:58:32.763161 1642009 machine.go:94] provisionDockerMachine start ...
	I1209 04:58:32.763249 1642009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473
	I1209 04:58:32.779308 1642009 main.go:143] libmachine: Using SSH client type: native
	I1209 04:58:32.779694 1642009 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1209 04:58:32.779710 1642009 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 04:58:32.780306 1642009 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55882->127.0.0.1:34260: read: connection reset by peer
	I1209 04:58:35.938161 1642009 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-634473
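The dial error above is expected: sshd inside the just-started container is not accepting connections yet, and the provisioner keeps retrying until the "hostname" probe succeeds about three seconds later. The equivalent manual probe, using the key and port from this run, would be:

	ssh -i /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473/id_rsa -p 34260 docker@127.0.0.1 hostname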
	
	I1209 04:58:35.938186 1642009 ubuntu.go:182] provisioning hostname "ha-634473"
	I1209 04:58:35.938257 1642009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473
	I1209 04:58:35.956270 1642009 main.go:143] libmachine: Using SSH client type: native
	I1209 04:58:35.956586 1642009 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1209 04:58:35.956601 1642009 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-634473 && echo "ha-634473" | sudo tee /etc/hostname
	I1209 04:58:36.128688 1642009 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-634473
	
	I1209 04:58:36.128821 1642009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473
	I1209 04:58:36.147336 1642009 main.go:143] libmachine: Using SSH client type: native
	I1209 04:58:36.147661 1642009 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1209 04:58:36.147684 1642009 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-634473' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-634473/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-634473' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 04:58:36.298807 1642009 main.go:143] libmachine: SSH cmd err, output: <nil>: 
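The shell fragment above makes the node's hostname resolve locally: if /etc/hosts already carries a 127.0.1.1 entry it is rewritten to point at ha-634473, otherwise one is appended. To confirm the result from the host:

	docker exec ha-634473 grep 127.0.1.1 /etc/hosts
	# -> 127.0.1.1 ha-634473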
	I1209 04:58:36.298837 1642009 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1577059/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1577059/.minikube}
	I1209 04:58:36.298867 1642009 ubuntu.go:190] setting up certificates
	I1209 04:58:36.298882 1642009 provision.go:84] configureAuth start
	I1209 04:58:36.298940 1642009 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473
	I1209 04:58:36.316423 1642009 provision.go:143] copyHostCerts
	I1209 04:58:36.316470 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem
	I1209 04:58:36.316506 1642009 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem, removing ...
	I1209 04:58:36.316518 1642009 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem
	I1209 04:58:36.316598 1642009 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem (1078 bytes)
	I1209 04:58:36.316691 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem
	I1209 04:58:36.316713 1642009 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem, removing ...
	I1209 04:58:36.316718 1642009 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem
	I1209 04:58:36.316744 1642009 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem (1123 bytes)
	I1209 04:58:36.316799 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem
	I1209 04:58:36.316824 1642009 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem, removing ...
	I1209 04:58:36.316834 1642009 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem
	I1209 04:58:36.316865 1642009 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem (1675 bytes)
	I1209 04:58:36.316919 1642009 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem org=jenkins.ha-634473 san=[127.0.0.1 192.168.49.2 ha-634473 localhost minikube]
	I1209 04:58:36.689444 1642009 provision.go:177] copyRemoteCerts
	I1209 04:58:36.689539 1642009 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 04:58:36.689604 1642009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473
	I1209 04:58:36.707109 1642009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473/id_rsa Username:docker}
	I1209 04:58:36.818273 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 04:58:36.818359 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 04:58:36.835775 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 04:58:36.835852 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1209 04:58:36.852922 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 04:58:36.852982 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 04:58:36.870425 1642009 provision.go:87] duration metric: took 571.517611ms to configureAuth
	I1209 04:58:36.870454 1642009 ubuntu.go:206] setting minikube options for container-runtime
	I1209 04:58:36.870662 1642009 config.go:182] Loaded profile config "ha-634473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:58:36.870775 1642009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473
	I1209 04:58:36.887002 1642009 main.go:143] libmachine: Using SSH client type: native
	I1209 04:58:36.887310 1642009 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34260 <nil> <nil>}
	I1209 04:58:36.887323 1642009 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 04:58:37.190892 1642009 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
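This drops an environment file at /etc/sysconfig/crio.minikube so that the CRI-O service can pick up "--insecure-registry 10.96.0.0/12" (the service CIDR) on the restart that follows, allowing pulls from in-cluster registries over plain HTTP. To inspect it:

	docker exec ha-634473 cat /etc/sysconfig/crio.minikube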
	
	I1209 04:58:37.190921 1642009 machine.go:97] duration metric: took 4.427736814s to provisionDockerMachine
	I1209 04:58:37.190930 1642009 client.go:176] duration metric: took 10.432368824s to LocalClient.Create
	I1209 04:58:37.190940 1642009 start.go:167] duration metric: took 10.432430248s to libmachine.API.Create "ha-634473"
	I1209 04:58:37.190948 1642009 start.go:293] postStartSetup for "ha-634473" (driver="docker")
	I1209 04:58:37.190957 1642009 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 04:58:37.191024 1642009 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 04:58:37.191074 1642009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473
	I1209 04:58:37.208637 1642009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473/id_rsa Username:docker}
	I1209 04:58:37.318863 1642009 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 04:58:37.322443 1642009 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 04:58:37.322477 1642009 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 04:58:37.322489 1642009 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1577059/.minikube/addons for local assets ...
	I1209 04:58:37.322592 1642009 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1577059/.minikube/files for local assets ...
	I1209 04:58:37.322687 1642009 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem -> 15805212.pem in /etc/ssl/certs
	I1209 04:58:37.322705 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem -> /etc/ssl/certs/15805212.pem
	I1209 04:58:37.322822 1642009 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 04:58:37.330104 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem --> /etc/ssl/certs/15805212.pem (1708 bytes)
	I1209 04:58:37.348525 1642009 start.go:296] duration metric: took 157.564049ms for postStartSetup
	I1209 04:58:37.348928 1642009 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473
	I1209 04:58:37.366425 1642009 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/config.json ...
	I1209 04:58:37.366774 1642009 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 04:58:37.366846 1642009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473
	I1209 04:58:37.383775 1642009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473/id_rsa Username:docker}
	I1209 04:58:37.491655 1642009 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 04:58:37.496444 1642009 start.go:128] duration metric: took 10.741490038s to createHost
	I1209 04:58:37.496515 1642009 start.go:83] releasing machines lock for "ha-634473", held for 10.741665572s
	I1209 04:58:37.496609 1642009 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473
	I1209 04:58:37.513108 1642009 ssh_runner.go:195] Run: cat /version.json
	I1209 04:58:37.513167 1642009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473
	I1209 04:58:37.513418 1642009 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 04:58:37.513478 1642009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473
	I1209 04:58:37.539679 1642009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473/id_rsa Username:docker}
	I1209 04:58:37.539914 1642009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473/id_rsa Username:docker}
	I1209 04:58:37.642557 1642009 ssh_runner.go:195] Run: systemctl --version
	I1209 04:58:37.742461 1642009 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 04:58:37.778749 1642009 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 04:58:37.783608 1642009 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 04:58:37.783684 1642009 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 04:58:37.814261 1642009 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1209 04:58:37.814282 1642009 start.go:496] detecting cgroup driver to use...
	I1209 04:58:37.814314 1642009 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 04:58:37.814364 1642009 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 04:58:37.832417 1642009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 04:58:37.845006 1642009 docker.go:218] disabling cri-docker service (if available) ...
	I1209 04:58:37.845079 1642009 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 04:58:37.863231 1642009 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 04:58:37.883371 1642009 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 04:58:38.006812 1642009 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 04:58:38.146049 1642009 docker.go:234] disabling docker service ...
	I1209 04:58:38.146118 1642009 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 04:58:38.168745 1642009 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 04:58:38.182740 1642009 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 04:58:38.304167 1642009 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 04:58:38.426529 1642009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 04:58:38.439870 1642009 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 04:58:38.453413 1642009 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 04:58:38.453524 1642009 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:58:38.462538 1642009 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 04:58:38.462759 1642009 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:58:38.475773 1642009 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:58:38.489234 1642009 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:58:38.499341 1642009 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 04:58:38.507779 1642009 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:58:38.516744 1642009 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:58:38.530803 1642009 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
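The sed series above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is forced to cgroupfs to match the host docker daemon, conmon is moved into the pod cgroup, and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls. The net effect can be checked with:

	docker exec ha-634473 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf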
	I1209 04:58:38.539932 1642009 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 04:58:38.547632 1642009 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 04:58:38.555456 1642009 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:58:38.670952 1642009 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 04:58:38.861585 1642009 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 04:58:38.861698 1642009 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 04:58:38.865392 1642009 start.go:564] Will wait 60s for crictl version
	I1209 04:58:38.865502 1642009 ssh_runner.go:195] Run: which crictl
	I1209 04:58:38.868817 1642009 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 04:58:38.894401 1642009 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1209 04:58:38.894528 1642009 ssh_runner.go:195] Run: crio --version
	I1209 04:58:38.923048 1642009 ssh_runner.go:195] Run: crio --version
	I1209 04:58:38.959957 1642009 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1209 04:58:38.962845 1642009 cli_runner.go:164] Run: docker network inspect ha-634473 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 04:58:38.978159 1642009 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1209 04:58:38.982010 1642009 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 04:58:38.992454 1642009 kubeadm.go:884] updating cluster {Name:ha-634473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-634473 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 04:58:38.992580 1642009 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 04:58:38.992643 1642009 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:58:39.030655 1642009 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 04:58:39.030733 1642009 crio.go:433] Images already preloaded, skipping extraction
	I1209 04:58:39.030828 1642009 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 04:58:39.056716 1642009 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 04:58:39.056736 1642009 cache_images.go:86] Images are preloaded, skipping loading
	I1209 04:58:39.056743 1642009 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1209 04:58:39.056830 1642009 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-634473 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-634473 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
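This drop-in is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down; the empty "ExecStart=" line is the standard systemd idiom for clearing the packaged command before substituting minikube's own. The merged unit can be reviewed with:

	docker exec ha-634473 systemctl cat kubelet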
	I1209 04:58:39.056917 1642009 ssh_runner.go:195] Run: crio config
	I1209 04:58:39.118832 1642009 cni.go:84] Creating CNI manager for ""
	I1209 04:58:39.118855 1642009 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1209 04:58:39.118884 1642009 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 04:58:39.118910 1642009 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-634473 NodeName:ha-634473 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 04:58:39.119036 1642009 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-634473"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
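The three documents above (InitConfiguration/ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) are written to /var/tmp/minikube/kubeadm.yaml.new below. As a sanity check one could validate the file with the bundled binary, assuming a kubeadm recent enough to ship "config validate"; this is not something the test itself runs:

	docker exec ha-634473 sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new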
	
	I1209 04:58:39.119060 1642009 kube-vip.go:115] generating kube-vip config ...
	I1209 04:58:39.119112 1642009 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1209 04:58:39.131722 1642009 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
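With no ip_vs modules loaded in the host kernel, minikube gives up on kube-vip's IPVS-based control-plane load-balancing and relies on ARP-based failover for the VIP (192.168.49.254, visible in the manifest below). On a host where the modules exist they could be loaded first:

	sudo modprobe ip_vs && lsmod | grep ip_vs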
	I1209 04:58:39.131843 1642009 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1209 04:58:39.131942 1642009 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1209 04:58:39.139899 1642009 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 04:58:39.140001 1642009 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1209 04:58:39.147861 1642009 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1209 04:58:39.161036 1642009 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 04:58:39.174498 1642009 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1209 04:58:39.188303 1642009 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1209 04:58:39.201732 1642009 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1209 04:58:39.205502 1642009 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 04:58:39.215794 1642009 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:58:39.336265 1642009 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 04:58:39.352720 1642009 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473 for IP: 192.168.49.2
	I1209 04:58:39.352785 1642009 certs.go:195] generating shared ca certs ...
	I1209 04:58:39.352817 1642009 certs.go:227] acquiring lock for ca certs: {Name:mkbe8bce08db7aa945866791683d426e1b560718 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:58:39.352986 1642009 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key
	I1209 04:58:39.353055 1642009 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key
	I1209 04:58:39.353078 1642009 certs.go:257] generating profile certs ...
	I1209 04:58:39.353162 1642009 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/client.key
	I1209 04:58:39.353200 1642009 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/client.crt with IP's: []
	I1209 04:58:39.502910 1642009 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/client.crt ...
	I1209 04:58:39.502947 1642009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/client.crt: {Name:mk1d49b88da1bbc732a37e608bf13838f3ffc5f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:58:39.503180 1642009 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/client.key ...
	I1209 04:58:39.503196 1642009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/client.key: {Name:mke02ae86ce80ab0b2721d3ccbd2f8a254a00403 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:58:39.503299 1642009 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.key.156d0c28
	I1209 04:58:39.503320 1642009 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.crt.156d0c28 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1209 04:58:39.709459 1642009 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.crt.156d0c28 ...
	I1209 04:58:39.709492 1642009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.crt.156d0c28: {Name:mkb6fee74627eb472440da32adaf0e4d5a02b5c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:58:39.709683 1642009 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.key.156d0c28 ...
	I1209 04:58:39.709705 1642009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.key.156d0c28: {Name:mk00168a8c0c443a7ac0952b83644e75a451a7d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:58:39.709798 1642009 certs.go:382] copying /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.crt.156d0c28 -> /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.crt
	I1209 04:58:39.709882 1642009 certs.go:386] copying /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.key.156d0c28 -> /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.key
	I1209 04:58:39.709945 1642009 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/proxy-client.key
	I1209 04:58:39.709965 1642009 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/proxy-client.crt with IP's: []
	I1209 04:58:39.834019 1642009 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/proxy-client.crt ...
	I1209 04:58:39.834048 1642009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/proxy-client.crt: {Name:mk857c46bacbfcb1d84528084125da8c7bd4fa93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:58:39.834219 1642009 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/proxy-client.key ...
	I1209 04:58:39.834233 1642009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/proxy-client.key: {Name:mkc520be385ded9fa77fe3445f361e02d22b2a55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:58:39.834322 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 04:58:39.834342 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 04:58:39.834359 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 04:58:39.834385 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 04:58:39.834401 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 04:58:39.834415 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 04:58:39.834430 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 04:58:39.834444 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 04:58:39.834496 1642009 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem (1338 bytes)
	W1209 04:58:39.834536 1642009 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521_empty.pem, impossibly tiny 0 bytes
	I1209 04:58:39.834549 1642009 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 04:58:39.834595 1642009 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem (1078 bytes)
	I1209 04:58:39.834621 1642009 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem (1123 bytes)
	I1209 04:58:39.834648 1642009 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem (1675 bytes)
	I1209 04:58:39.834698 1642009 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem (1708 bytes)
	I1209 04:58:39.834732 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:58:39.834749 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem -> /usr/share/ca-certificates/1580521.pem
	I1209 04:58:39.834761 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem -> /usr/share/ca-certificates/15805212.pem
	I1209 04:58:39.835275 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 04:58:39.855268 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 04:58:39.873553 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 04:58:39.894336 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 04:58:39.914364 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1209 04:58:39.931984 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 04:58:39.949376 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 04:58:39.966633 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 04:58:39.984184 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 04:58:40.019187 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem --> /usr/share/ca-certificates/1580521.pem (1338 bytes)
	I1209 04:58:40.050165 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem --> /usr/share/ca-certificates/15805212.pem (1708 bytes)
	I1209 04:58:40.068929 1642009 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
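
The "scp memory" entries above (e.g. the 738-byte kubeconfig) copy content that minikube generated in-process rather than a file on disk. A rough shell equivalent, with placeholder variables ($CONTENT, $SSH_KEY and $SSH_PORT are illustrative, not values from this run):

	# stream generated content to the node over SSH instead of copying a local file
	printf '%s' "$CONTENT" | \
	  ssh -i "$SSH_KEY" -p "$SSH_PORT" docker@127.0.0.1 'sudo tee /var/lib/minikube/kubeconfig >/dev/null'
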
	I1209 04:58:40.085425 1642009 ssh_runner.go:195] Run: openssl version
	I1209 04:58:40.093371 1642009 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15805212.pem
	I1209 04:58:40.106984 1642009 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15805212.pem /etc/ssl/certs/15805212.pem
	I1209 04:58:40.116589 1642009 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15805212.pem
	I1209 04:58:40.121199 1642009 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:27 /usr/share/ca-certificates/15805212.pem
	I1209 04:58:40.121310 1642009 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15805212.pem
	I1209 04:58:40.170185 1642009 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 04:58:40.178079 1642009 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/15805212.pem /etc/ssl/certs/3ec20f2e.0
	I1209 04:58:40.186187 1642009 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:58:40.193882 1642009 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 04:58:40.201625 1642009 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:58:40.205449 1642009 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:17 /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:58:40.205515 1642009 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:58:40.246452 1642009 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 04:58:40.253899 1642009 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1209 04:58:40.261399 1642009 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1580521.pem
	I1209 04:58:40.268869 1642009 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1580521.pem /etc/ssl/certs/1580521.pem
	I1209 04:58:40.276623 1642009 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1580521.pem
	I1209 04:58:40.280700 1642009 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:27 /usr/share/ca-certificates/1580521.pem
	I1209 04:58:40.280763 1642009 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1580521.pem
	I1209 04:58:40.321844 1642009 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 04:58:40.329075 1642009 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1580521.pem /etc/ssl/certs/51391683.0
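
The hash-named symlinks above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's subject-hash lookup convention: the link name is the output of "openssl x509 -hash" plus a ".0" suffix, which is how TLS clients locate a CA in /etc/ssl/certs. The same step by hand, for the minikube CA:

	# compute the subject hash and create the lookup symlink OpenSSL expects
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
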
	I1209 04:58:40.336528 1642009 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 04:58:40.340050 1642009 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 04:58:40.340151 1642009 kubeadm.go:401] StartCluster: {Name:ha-634473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-634473 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:58:40.340242 1642009 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 04:58:40.340303 1642009 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 04:58:40.370650 1642009 cri.go:89] found id: ""
	I1209 04:58:40.370727 1642009 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 04:58:40.378635 1642009 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 04:58:40.386388 1642009 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 04:58:40.386472 1642009 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 04:58:40.394488 1642009 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 04:58:40.394508 1642009 kubeadm.go:158] found existing configuration files:
	
	I1209 04:58:40.394564 1642009 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 04:58:40.402149 1642009 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 04:58:40.402230 1642009 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 04:58:40.409919 1642009 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 04:58:40.418003 1642009 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 04:58:40.418068 1642009 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 04:58:40.425287 1642009 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 04:58:40.433362 1642009 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 04:58:40.433450 1642009 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 04:58:40.440947 1642009 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 04:58:40.448743 1642009 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 04:58:40.448847 1642009 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 04:58:40.456073 1642009 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 04:58:40.499837 1642009 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1209 04:58:40.500181 1642009 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 04:58:40.525015 1642009 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 04:58:40.525092 1642009 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1209 04:58:40.525132 1642009 kubeadm.go:319] OS: Linux
	I1209 04:58:40.525182 1642009 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 04:58:40.525235 1642009 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1209 04:58:40.525302 1642009 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 04:58:40.525355 1642009 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 04:58:40.525408 1642009 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 04:58:40.525460 1642009 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 04:58:40.525510 1642009 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 04:58:40.525564 1642009 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 04:58:40.525614 1642009 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1209 04:58:40.598916 1642009 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 04:58:40.599127 1642009 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 04:58:40.599262 1642009 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 04:58:40.606614 1642009 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 04:58:40.613475 1642009 out.go:252]   - Generating certificates and keys ...
	I1209 04:58:40.613637 1642009 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 04:58:40.613723 1642009 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 04:58:41.003146 1642009 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 04:58:41.244711 1642009 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1209 04:58:42.236795 1642009 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1209 04:58:43.219619 1642009 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1209 04:58:43.769230 1642009 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1209 04:58:43.769532 1642009 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [ha-634473 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1209 04:58:44.097263 1642009 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1209 04:58:44.097572 1642009 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [ha-634473 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1209 04:58:44.507523 1642009 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 04:58:45.334932 1642009 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 04:58:45.889925 1642009 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1209 04:58:45.890202 1642009 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 04:58:46.207319 1642009 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 04:58:46.697149 1642009 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 04:58:46.857734 1642009 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 04:58:47.670772 1642009 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 04:58:47.891128 1642009 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 04:58:47.891747 1642009 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 04:58:47.894239 1642009 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 04:58:47.897531 1642009 out.go:252]   - Booting up control plane ...
	I1209 04:58:47.897640 1642009 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 04:58:47.897720 1642009 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 04:58:47.897788 1642009 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 04:58:47.913272 1642009 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 04:58:47.913387 1642009 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 04:58:47.921565 1642009 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 04:58:47.921988 1642009 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 04:58:47.922222 1642009 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 04:58:48.065044 1642009 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 04:58:48.065182 1642009 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 04:58:50.069854 1642009 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.00199409s
	I1209 04:58:50.073064 1642009 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1209 04:58:50.073174 1642009 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1209 04:58:50.073270 1642009 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1209 04:58:50.073358 1642009 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1209 04:58:54.167031 1642009 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.093137559s
	I1209 04:58:56.576122 1642009 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.502245899s
	I1209 04:58:56.719491 1642009 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.646461618s
	I1209 04:58:56.760346 1642009 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 04:58:56.776056 1642009 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 04:58:56.791144 1642009 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 04:58:56.791371 1642009 kubeadm.go:319] [mark-control-plane] Marking the node ha-634473 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 04:58:56.804484 1642009 kubeadm.go:319] [bootstrap-token] Using token: j90411.v4cit9pmvjao1qvy
	I1209 04:58:56.807520 1642009 out.go:252]   - Configuring RBAC rules ...
	I1209 04:58:56.807642 1642009 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 04:58:56.813270 1642009 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 04:58:56.821493 1642009 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 04:58:56.827594 1642009 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 04:58:56.832328 1642009 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 04:58:56.836349 1642009 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 04:58:57.127163 1642009 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 04:58:57.579237 1642009 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1209 04:58:58.126748 1642009 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1209 04:58:58.127918 1642009 kubeadm.go:319] 
	I1209 04:58:58.127989 1642009 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1209 04:58:58.128001 1642009 kubeadm.go:319] 
	I1209 04:58:58.128075 1642009 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1209 04:58:58.128088 1642009 kubeadm.go:319] 
	I1209 04:58:58.128126 1642009 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1209 04:58:58.128220 1642009 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 04:58:58.128274 1642009 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 04:58:58.128283 1642009 kubeadm.go:319] 
	I1209 04:58:58.128335 1642009 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1209 04:58:58.128342 1642009 kubeadm.go:319] 
	I1209 04:58:58.128387 1642009 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 04:58:58.128395 1642009 kubeadm.go:319] 
	I1209 04:58:58.128445 1642009 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1209 04:58:58.128523 1642009 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 04:58:58.128591 1642009 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 04:58:58.128597 1642009 kubeadm.go:319] 
	I1209 04:58:58.128677 1642009 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 04:58:58.128758 1642009 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1209 04:58:58.128766 1642009 kubeadm.go:319] 
	I1209 04:58:58.128846 1642009 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token j90411.v4cit9pmvjao1qvy \
	I1209 04:58:58.128948 1642009 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7776204d6c5f563a8dabf61d61a81585bb99fbd1023d362d699de436ef3f27fb \
	I1209 04:58:58.128970 1642009 kubeadm.go:319] 	--control-plane 
	I1209 04:58:58.128974 1642009 kubeadm.go:319] 
	I1209 04:58:58.129054 1642009 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1209 04:58:58.129065 1642009 kubeadm.go:319] 
	I1209 04:58:58.129329 1642009 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token j90411.v4cit9pmvjao1qvy \
	I1209 04:58:58.129442 1642009 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7776204d6c5f563a8dabf61d61a81585bb99fbd1023d362d699de436ef3f27fb 
	I1209 04:58:58.133982 1642009 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1209 04:58:58.134215 1642009 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 04:58:58.134325 1642009 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
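
The Service-Kubelet warning above appears to be routine under this driver, since minikube starts the kubelet itself; on a plain kubeadm host it would be resolved exactly as the message suggests:

	sudo systemctl enable kubelet.service
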
	I1209 04:58:58.134349 1642009 cni.go:84] Creating CNI manager for ""
	I1209 04:58:58.134361 1642009 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1209 04:58:58.137591 1642009 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1209 04:58:58.140438 1642009 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1209 04:58:58.144658 1642009 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1209 04:58:58.144683 1642009 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1209 04:58:58.158136 1642009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1209 04:58:58.462374 1642009 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 04:58:58.462505 1642009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 04:58:58.462615 1642009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-634473 minikube.k8s.io/updated_at=2025_12_09T04_58_58_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d minikube.k8s.io/name=ha-634473 minikube.k8s.io/primary=true
	I1209 04:58:58.603376 1642009 ops.go:34] apiserver oom_adj: -16
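
The oom_adj probe a few lines up confirms the API server is shielded from the kernel OOM killer; a negative value (here -16) lowers its kill priority. The spot-check from the log, runnable on any node:

	cat /proc/$(pgrep kube-apiserver)/oom_adj   # expect a negative value, e.g. -16
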
	I1209 04:58:58.603588 1642009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 04:58:59.103832 1642009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 04:58:59.604289 1642009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 04:59:00.104613 1642009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 04:59:00.604399 1642009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 04:59:01.104348 1642009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 04:59:01.604434 1642009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 04:59:02.103864 1642009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 04:59:02.604605 1642009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 04:59:03.103669 1642009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 04:59:03.604645 1642009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 04:59:03.784328 1642009 kubeadm.go:1114] duration metric: took 5.321868955s to wait for elevateKubeSystemPrivileges
	I1209 04:59:03.784357 1642009 kubeadm.go:403] duration metric: took 23.444211717s to StartCluster
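
The repeated "kubectl get sa default" runs above are a readiness poll: the cluster is only usable for workloads once the default ServiceAccount has been created by the controller manager. A sketch of the same wait loop, using the bundled kubectl exactly as the log does:

	# poll (roughly every 500ms, as above) until the default ServiceAccount exists
	until sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
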
	I1209 04:59:03.784375 1642009 settings.go:142] acquiring lock: {Name:mk2ff9b0d23dc8757d89015af482b8c477568e49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:59:03.784440 1642009 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:59:03.785101 1642009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/kubeconfig: {Name:mk56da51bd85daae017f7ca18ae73d8a385a4c6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:59:03.785312 1642009 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 04:59:03.785338 1642009 start.go:242] waiting for startup goroutines ...
	I1209 04:59:03.785349 1642009 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 04:59:03.785409 1642009 addons.go:70] Setting storage-provisioner=true in profile "ha-634473"
	I1209 04:59:03.785426 1642009 addons.go:239] Setting addon storage-provisioner=true in "ha-634473"
	I1209 04:59:03.785449 1642009 host.go:66] Checking if "ha-634473" exists ...
	I1209 04:59:03.785910 1642009 cli_runner.go:164] Run: docker container inspect ha-634473 --format={{.State.Status}}
	I1209 04:59:03.786085 1642009 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1209 04:59:03.786532 1642009 addons.go:70] Setting default-storageclass=true in profile "ha-634473"
	I1209 04:59:03.786565 1642009 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "ha-634473"
	I1209 04:59:03.786918 1642009 cli_runner.go:164] Run: docker container inspect ha-634473 --format={{.State.Status}}
	I1209 04:59:03.787648 1642009 config.go:182] Loaded profile config "ha-634473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:59:03.832571 1642009 kapi.go:59] client config for ha-634473: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/client.crt", KeyFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/client.key", CAFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 04:59:03.833113 1642009 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1209 04:59:03.833126 1642009 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1209 04:59:03.833131 1642009 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1209 04:59:03.833136 1642009 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1209 04:59:03.833140 1642009 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1209 04:59:03.833421 1642009 addons.go:239] Setting addon default-storageclass=true in "ha-634473"
	I1209 04:59:03.833448 1642009 host.go:66] Checking if "ha-634473" exists ...
	I1209 04:59:03.833875 1642009 cli_runner.go:164] Run: docker container inspect ha-634473 --format={{.State.Status}}
	I1209 04:59:03.834108 1642009 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1209 04:59:03.839352 1642009 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 04:59:03.846737 1642009 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:59:03.846764 1642009 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 04:59:03.846847 1642009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473
	I1209 04:59:03.866685 1642009 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 04:59:03.866715 1642009 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 04:59:03.866781 1642009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473
	I1209 04:59:03.900873 1642009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473/id_rsa Username:docker}
	I1209 04:59:03.914233 1642009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473/id_rsa Username:docker}
	I1209 04:59:04.044811 1642009 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1209 04:59:04.185692 1642009 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 04:59:04.196610 1642009 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 04:59:04.481334 1642009 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
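
The sed pipeline at 04:59:04 rewrites the coredns ConfigMap so that host.minikube.internal resolves inside the cluster; after the replace, the injected stanza in the Corefile should read roughly:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}
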
	I1209 04:59:04.847641 1642009 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1209 04:59:04.851335 1642009 addons.go:530] duration metric: took 1.065975525s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1209 04:59:04.851399 1642009 start.go:247] waiting for cluster config update ...
	I1209 04:59:04.851414 1642009 start.go:256] writing updated cluster config ...
	I1209 04:59:04.854244 1642009 out.go:203] 
	I1209 04:59:04.857289 1642009 config.go:182] Loaded profile config "ha-634473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:59:04.857410 1642009 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/config.json ...
	I1209 04:59:04.860915 1642009 out.go:179] * Starting "ha-634473-m02" control-plane node in "ha-634473" cluster
	I1209 04:59:04.863753 1642009 cache.go:134] Beginning downloading kic base image for docker with crio
	I1209 04:59:04.866848 1642009 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 04:59:04.870684 1642009 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 04:59:04.870762 1642009 cache.go:65] Caching tarball of preloaded images
	I1209 04:59:04.870697 1642009 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 04:59:04.870866 1642009 preload.go:238] Found /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1209 04:59:04.870884 1642009 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1209 04:59:04.870983 1642009 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/config.json ...
	I1209 04:59:04.890521 1642009 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 04:59:04.890543 1642009 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 04:59:04.890557 1642009 cache.go:243] Successfully downloaded all kic artifacts
	I1209 04:59:04.890608 1642009 start.go:360] acquireMachinesLock for ha-634473-m02: {Name:mk12a21800248c722fe299fa0c218c0fccb4ad14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 04:59:04.890728 1642009 start.go:364] duration metric: took 104.322µs to acquireMachinesLock for "ha-634473-m02"
	I1209 04:59:04.890762 1642009 start.go:93] Provisioning new machine with config: &{Name:ha-634473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-634473 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 04:59:04.890840 1642009 start.go:125] createHost starting for "m02" (driver="docker")
	I1209 04:59:04.894415 1642009 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1209 04:59:04.894801 1642009 start.go:159] libmachine.API.Create for "ha-634473" (driver="docker")
	I1209 04:59:04.894832 1642009 client.go:173] LocalClient.Create starting
	I1209 04:59:04.894922 1642009 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem
	I1209 04:59:04.894976 1642009 main.go:143] libmachine: Decoding PEM data...
	I1209 04:59:04.894992 1642009 main.go:143] libmachine: Parsing certificate...
	I1209 04:59:04.895063 1642009 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem
	I1209 04:59:04.895083 1642009 main.go:143] libmachine: Decoding PEM data...
	I1209 04:59:04.895095 1642009 main.go:143] libmachine: Parsing certificate...
	I1209 04:59:04.895432 1642009 cli_runner.go:164] Run: docker network inspect ha-634473 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 04:59:04.914233 1642009 network_create.go:77] Found existing network {name:ha-634473 subnet:0x4001b47da0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I1209 04:59:04.914276 1642009 kic.go:121] calculated static IP "192.168.49.3" for the "ha-634473-m02" container
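
The Go-template inspect above extracts the subnet, gateway and per-container IPs from the network's IPAM config in one call; a trimmed-down form that fetches just the subnet, which is what the static-IP calculation for m02 needs:

	docker network inspect ha-634473 --format '{{(index .IPAM.Config 0).Subnet}}'
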
	I1209 04:59:04.914356 1642009 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1209 04:59:04.930479 1642009 cli_runner.go:164] Run: docker volume create ha-634473-m02 --label name.minikube.sigs.k8s.io=ha-634473-m02 --label created_by.minikube.sigs.k8s.io=true
	I1209 04:59:04.948353 1642009 oci.go:103] Successfully created a docker volume ha-634473-m02
	I1209 04:59:04.948510 1642009 cli_runner.go:164] Run: docker run --rm --name ha-634473-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-634473-m02 --entrypoint /usr/bin/test -v ha-634473-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -d /var/lib
	I1209 04:59:05.571339 1642009 oci.go:107] Successfully prepared a docker volume ha-634473-m02
	I1209 04:59:05.571391 1642009 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 04:59:05.571403 1642009 kic.go:194] Starting extracting preloaded images to volume ...
	I1209 04:59:05.571471 1642009 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ha-634473-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir
	I1209 04:59:09.554699 1642009 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ha-634473-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir: (3.983186064s)
	I1209 04:59:09.554738 1642009 kic.go:203] duration metric: took 3.983332201s to extract preloaded images to volume ...
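
The two docker runs above implement the preload trick: a throwaway container mounts both the lz4 image tarball (read-only) and the node's /var volume, then untars the cached images straight into the volume before the node container ever boots. The general pattern, with placeholder names:

	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PRELOAD_TARBALL":/preloaded.tar:ro \
	  -v "$NODE_VOLUME":/extractDir \
	  "$KICBASE_IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir
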
	W1209 04:59:09.554881 1642009 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1209 04:59:09.554987 1642009 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1209 04:59:09.624322 1642009 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-634473-m02 --name ha-634473-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-634473-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-634473-m02 --network ha-634473 --ip 192.168.49.3 --volume ha-634473-m02:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c
	I1209 04:59:09.926322 1642009 cli_runner.go:164] Run: docker container inspect ha-634473-m02 --format={{.State.Running}}
	I1209 04:59:09.950345 1642009 cli_runner.go:164] Run: docker container inspect ha-634473-m02 --format={{.State.Status}}
	I1209 04:59:09.972886 1642009 cli_runner.go:164] Run: docker exec ha-634473-m02 stat /var/lib/dpkg/alternatives/iptables
	I1209 04:59:10.041781 1642009 oci.go:144] the created container "ha-634473-m02" has a running status.
	I1209 04:59:10.041826 1642009 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m02/id_rsa...
	I1209 04:59:10.304562 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1209 04:59:10.304619 1642009 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1209 04:59:10.333944 1642009 cli_runner.go:164] Run: docker container inspect ha-634473-m02 --format={{.State.Status}}
	I1209 04:59:10.369394 1642009 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1209 04:59:10.369415 1642009 kic_runner.go:114] Args: [docker exec --privileged ha-634473-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1209 04:59:10.440148 1642009 cli_runner.go:164] Run: docker container inspect ha-634473-m02 --format={{.State.Status}}
	I1209 04:59:10.472487 1642009 machine.go:94] provisionDockerMachine start ...
	I1209 04:59:10.472586 1642009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m02
	I1209 04:59:10.499313 1642009 main.go:143] libmachine: Using SSH client type: native
	I1209 04:59:10.499648 1642009 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34265 <nil> <nil>}
	I1209 04:59:10.499669 1642009 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 04:59:10.500284 1642009 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59162->127.0.0.1:34265: read: connection reset by peer
	I1209 04:59:13.654247 1642009 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-634473-m02
	
	I1209 04:59:13.654274 1642009 ubuntu.go:182] provisioning hostname "ha-634473-m02"
	I1209 04:59:13.654352 1642009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m02
	I1209 04:59:13.671553 1642009 main.go:143] libmachine: Using SSH client type: native
	I1209 04:59:13.671869 1642009 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34265 <nil> <nil>}
	I1209 04:59:13.671886 1642009 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-634473-m02 && echo "ha-634473-m02" | sudo tee /etc/hostname
	I1209 04:59:13.837247 1642009 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-634473-m02
	
	I1209 04:59:13.837408 1642009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m02
	I1209 04:59:13.855824 1642009 main.go:143] libmachine: Using SSH client type: native
	I1209 04:59:13.856135 1642009 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34265 <nil> <nil>}
	I1209 04:59:13.856157 1642009 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-634473-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-634473-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-634473-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 04:59:14.015260 1642009 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 04:59:14.015360 1642009 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1577059/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1577059/.minikube}
	I1209 04:59:14.015422 1642009 ubuntu.go:190] setting up certificates
	I1209 04:59:14.015451 1642009 provision.go:84] configureAuth start
	I1209 04:59:14.015567 1642009 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473-m02
	I1209 04:59:14.037596 1642009 provision.go:143] copyHostCerts
	I1209 04:59:14.037637 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem
	I1209 04:59:14.037671 1642009 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem, removing ...
	I1209 04:59:14.037679 1642009 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem
	I1209 04:59:14.037757 1642009 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem (1675 bytes)
	I1209 04:59:14.037879 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem
	I1209 04:59:14.037896 1642009 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem, removing ...
	I1209 04:59:14.037901 1642009 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem
	I1209 04:59:14.037930 1642009 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem (1078 bytes)
	I1209 04:59:14.037975 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem
	I1209 04:59:14.037991 1642009 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem, removing ...
	I1209 04:59:14.037995 1642009 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem
	I1209 04:59:14.038018 1642009 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem (1123 bytes)
	I1209 04:59:14.038070 1642009 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem org=jenkins.ha-634473-m02 san=[127.0.0.1 192.168.49.3 ha-634473-m02 localhost minikube]
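
configureAuth above signs a server certificate against the local CA with the listed SANs. An approximate openssl equivalent (minikube does this in Go; these commands are a sketch of the same result, not its actual code path):

	# create a key and CSR, then sign it with the minikube CA, adding the SANs from the log
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	  -subj "/O=jenkins.ha-634473-m02" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.3,DNS:ha-634473-m02,DNS:localhost,DNS:minikube') \
	  -out server.pem
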
	I1209 04:59:14.156690 1642009 provision.go:177] copyRemoteCerts
	I1209 04:59:14.156757 1642009 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 04:59:14.156801 1642009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m02
	I1209 04:59:14.175231 1642009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34265 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m02/id_rsa Username:docker}
	I1209 04:59:14.282455 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 04:59:14.282527 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1209 04:59:14.307283 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 04:59:14.307348 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 04:59:14.325477 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 04:59:14.325540 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 04:59:14.345468 1642009 provision.go:87] duration metric: took 329.990113ms to configureAuth
	I1209 04:59:14.345545 1642009 ubuntu.go:206] setting minikube options for container-runtime
	I1209 04:59:14.345778 1642009 config.go:182] Loaded profile config "ha-634473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:59:14.345929 1642009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m02
	I1209 04:59:14.366653 1642009 main.go:143] libmachine: Using SSH client type: native
	I1209 04:59:14.367058 1642009 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34265 <nil> <nil>}
	I1209 04:59:14.367079 1642009 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 04:59:14.675063 1642009 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 04:59:14.675087 1642009 machine.go:97] duration metric: took 4.202572847s to provisionDockerMachine
	I1209 04:59:14.675097 1642009 client.go:176] duration metric: took 9.780259382s to LocalClient.Create
	I1209 04:59:14.675126 1642009 start.go:167] duration metric: took 9.780328208s to libmachine.API.Create "ha-634473"
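
The sysconfig drop-in written during provisioning feeds extra flags to CRI-O, presumably wired in through an EnvironmentFile= line in the crio systemd unit (an assumption; the unit itself is not shown in this log). After the restart it can be verified on the node with:

	cat /etc/sysconfig/crio.minikube
	# expected content, per the tee above:
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
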
	I1209 04:59:14.675133 1642009 start.go:293] postStartSetup for "ha-634473-m02" (driver="docker")
	I1209 04:59:14.675147 1642009 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 04:59:14.675227 1642009 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 04:59:14.675275 1642009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m02
	I1209 04:59:14.692643 1642009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34265 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m02/id_rsa Username:docker}
	I1209 04:59:14.800196 1642009 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 04:59:14.804150 1642009 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 04:59:14.804180 1642009 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 04:59:14.804193 1642009 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1577059/.minikube/addons for local assets ...
	I1209 04:59:14.804247 1642009 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1577059/.minikube/files for local assets ...
	I1209 04:59:14.804329 1642009 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem -> 15805212.pem in /etc/ssl/certs
	I1209 04:59:14.804340 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem -> /etc/ssl/certs/15805212.pem
	I1209 04:59:14.804438 1642009 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 04:59:14.812349 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem --> /etc/ssl/certs/15805212.pem (1708 bytes)
	I1209 04:59:14.830096 1642009 start.go:296] duration metric: took 154.9443ms for postStartSetup
	I1209 04:59:14.830524 1642009 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473-m02
	I1209 04:59:14.846914 1642009 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/config.json ...
	I1209 04:59:14.847199 1642009 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 04:59:14.847256 1642009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m02
	I1209 04:59:14.863222 1642009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34265 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m02/id_rsa Username:docker}
	I1209 04:59:14.967470 1642009 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 04:59:14.972085 1642009 start.go:128] duration metric: took 10.081228955s to createHost
	I1209 04:59:14.972109 1642009 start.go:83] releasing machines lock for "ha-634473-m02", held for 10.081364909s
	I1209 04:59:14.972193 1642009 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473-m02
	I1209 04:59:14.994888 1642009 out.go:179] * Found network options:
	I1209 04:59:15.007734 1642009 out.go:179]   - NO_PROXY=192.168.49.2
	W1209 04:59:15.012049 1642009 proxy.go:120] fail to check proxy env: Error ip not in block
	W1209 04:59:15.012122 1642009 proxy.go:120] fail to check proxy env: Error ip not in block
	I1209 04:59:15.012205 1642009 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 04:59:15.012251 1642009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m02
	I1209 04:59:15.012274 1642009 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 04:59:15.012738 1642009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m02
	I1209 04:59:15.042034 1642009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34265 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m02/id_rsa Username:docker}
	I1209 04:59:15.046375 1642009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34265 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m02/id_rsa Username:docker}
	I1209 04:59:15.189838 1642009 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 04:59:15.257110 1642009 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 04:59:15.257187 1642009 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 04:59:15.286945 1642009 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1209 04:59:15.286971 1642009 start.go:496] detecting cgroup driver to use...
	I1209 04:59:15.287004 1642009 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 04:59:15.287053 1642009 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 04:59:15.308885 1642009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 04:59:15.323111 1642009 docker.go:218] disabling cri-docker service (if available) ...
	I1209 04:59:15.323175 1642009 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 04:59:15.342090 1642009 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 04:59:15.362716 1642009 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 04:59:15.484938 1642009 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 04:59:15.627477 1642009 docker.go:234] disabling docker service ...
	I1209 04:59:15.627670 1642009 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 04:59:15.650067 1642009 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 04:59:15.665326 1642009 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 04:59:15.799868 1642009 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 04:59:15.933571 1642009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 04:59:15.947811 1642009 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 04:59:15.961627 1642009 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 04:59:15.961703 1642009 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:59:15.970244 1642009 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 04:59:15.970316 1642009 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:59:15.980395 1642009 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:59:15.990181 1642009 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:59:16.000272 1642009 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 04:59:16.010765 1642009 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:59:16.020438 1642009 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:59:16.034809 1642009 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 04:59:16.044369 1642009 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 04:59:16.052717 1642009 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 04:59:16.060718 1642009 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:59:16.190228 1642009 ssh_runner.go:195] Run: sudo systemctl restart crio
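The sed rewrites above are whole-line edits to a single CRI-O drop-in file: pin the pause image, force the cgroupfs cgroup manager, set conmon_cgroup, and inject a default_sysctls entry, followed by the systemctl restart that makes them take effect. As a standalone illustration of the first two edits (not minikube's actual code, which shells out to sed over SSH; run as root), the same line-oriented rewrite in Go:

package main

import (
	"os"
	"regexp"
)

// Equivalent of the logged sed edits: rewrite whole lines of the CRI-O
// drop-in so the runtime uses the pinned pause image and the cgroupfs
// cgroup manager. Illustrative sketch only.
func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	conf, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(conf, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, conf, 0o644); err != nil {
		panic(err)
	}
}

After the rewrite, crio must be restarted for the drop-in to be re-read, which is exactly what the "sudo systemctl restart crio" above does.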
	I1209 04:59:16.362696 1642009 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 04:59:16.362818 1642009 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 04:59:16.366888 1642009 start.go:564] Will wait 60s for crictl version
	I1209 04:59:16.366976 1642009 ssh_runner.go:195] Run: which crictl
	I1209 04:59:16.370656 1642009 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 04:59:16.398317 1642009 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1209 04:59:16.398438 1642009 ssh_runner.go:195] Run: crio --version
	I1209 04:59:16.429549 1642009 ssh_runner.go:195] Run: crio --version
	I1209 04:59:16.464252 1642009 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1209 04:59:16.467105 1642009 out.go:179]   - env NO_PROXY=192.168.49.2
	I1209 04:59:16.469993 1642009 cli_runner.go:164] Run: docker network inspect ha-634473 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 04:59:16.486678 1642009 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1209 04:59:16.490563 1642009 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 04:59:16.500874 1642009 mustload.go:66] Loading cluster: ha-634473
	I1209 04:59:16.501160 1642009 config.go:182] Loaded profile config "ha-634473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:59:16.501515 1642009 cli_runner.go:164] Run: docker container inspect ha-634473 --format={{.State.Status}}
	I1209 04:59:16.518856 1642009 host.go:66] Checking if "ha-634473" exists ...
	I1209 04:59:16.519139 1642009 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473 for IP: 192.168.49.3
	I1209 04:59:16.519153 1642009 certs.go:195] generating shared ca certs ...
	I1209 04:59:16.519174 1642009 certs.go:227] acquiring lock for ca certs: {Name:mkbe8bce08db7aa945866791683d426e1b560718 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:59:16.519305 1642009 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key
	I1209 04:59:16.519352 1642009 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key
	I1209 04:59:16.519363 1642009 certs.go:257] generating profile certs ...
	I1209 04:59:16.519441 1642009 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/client.key
	I1209 04:59:16.519477 1642009 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.key.c689ad79
	I1209 04:59:16.519495 1642009 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.crt.c689ad79 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1209 04:59:16.677306 1642009 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.crt.c689ad79 ...
	I1209 04:59:16.677335 1642009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.crt.c689ad79: {Name:mk1fad0f878400798835876b369302c6faf088c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:59:16.677529 1642009 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.key.c689ad79 ...
	I1209 04:59:16.677544 1642009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.key.c689ad79: {Name:mk2aec1a0c974a03c76f6c30808f945f455356c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 04:59:16.677634 1642009 certs.go:382] copying /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.crt.c689ad79 -> /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.crt
	I1209 04:59:16.677772 1642009 certs.go:386] copying /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.key.c689ad79 -> /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.key
	I1209 04:59:16.677904 1642009 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/proxy-client.key
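The apiserver cert generated above must cover every address a client might use to reach the HA control plane, which is why the SAN list includes the in-cluster service IP (10.96.0.1), loopback, both control-plane node IPs, and the kube-vip VIP 192.168.49.254. A minimal, self-contained Go sketch of issuing such a cert from a CA (hypothetical code, not minikube's certs.go; the in-memory CA, key sizes, and lifetimes are illustrative):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Stand-in CA; minikube loads ca.crt/ca.key from the profile instead.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Apiserver serving cert with the SAN IPs from the log: service VIP,
	// loopback, both control-plane node IPs, and the kube-vip address.
	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	leaf := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.49.2"), net.ParseIP("192.168.49.3"), net.ParseIP("192.168.49.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, leaf, caCert, &leafKey.PublicKey, caKey)
	check(err)
	_ = der // a real flow would PEM-encode this and write apiserver.crt
}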
	I1209 04:59:16.677923 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 04:59:16.677938 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 04:59:16.677950 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 04:59:16.677962 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 04:59:16.677974 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 04:59:16.677985 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 04:59:16.678004 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 04:59:16.678015 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 04:59:16.678072 1642009 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem (1338 bytes)
	W1209 04:59:16.678107 1642009 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521_empty.pem, impossibly tiny 0 bytes
	I1209 04:59:16.678120 1642009 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 04:59:16.678148 1642009 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem (1078 bytes)
	I1209 04:59:16.678182 1642009 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem (1123 bytes)
	I1209 04:59:16.678210 1642009 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem (1675 bytes)
	I1209 04:59:16.678259 1642009 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem (1708 bytes)
	I1209 04:59:16.678296 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:59:16.678312 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem -> /usr/share/ca-certificates/1580521.pem
	I1209 04:59:16.678323 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem -> /usr/share/ca-certificates/15805212.pem
	I1209 04:59:16.678384 1642009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473
	I1209 04:59:16.697609 1642009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473/id_rsa Username:docker}
	I1209 04:59:16.794896 1642009 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1209 04:59:16.798627 1642009 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1209 04:59:16.806661 1642009 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1209 04:59:16.810100 1642009 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1209 04:59:16.818658 1642009 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1209 04:59:16.822216 1642009 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1209 04:59:16.831146 1642009 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1209 04:59:16.834670 1642009 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1209 04:59:16.843065 1642009 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1209 04:59:16.846717 1642009 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1209 04:59:16.855152 1642009 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1209 04:59:16.858754 1642009 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1209 04:59:16.866874 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 04:59:16.884974 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 04:59:16.904276 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 04:59:16.921863 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 04:59:16.940017 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1209 04:59:16.957639 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 04:59:16.975008 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 04:59:16.998911 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 04:59:17.020790 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 04:59:17.039180 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem --> /usr/share/ca-certificates/1580521.pem (1338 bytes)
	I1209 04:59:17.057253 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem --> /usr/share/ca-certificates/15805212.pem (1708 bytes)
	I1209 04:59:17.075574 1642009 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1209 04:59:17.088129 1642009 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1209 04:59:17.104201 1642009 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1209 04:59:17.116684 1642009 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1209 04:59:17.129093 1642009 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1209 04:59:17.141828 1642009 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1209 04:59:17.154734 1642009 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1209 04:59:17.167325 1642009 ssh_runner.go:195] Run: openssl version
	I1209 04:59:17.173725 1642009 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1580521.pem
	I1209 04:59:17.180945 1642009 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1580521.pem /etc/ssl/certs/1580521.pem
	I1209 04:59:17.188178 1642009 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1580521.pem
	I1209 04:59:17.191744 1642009 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:27 /usr/share/ca-certificates/1580521.pem
	I1209 04:59:17.191811 1642009 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1580521.pem
	I1209 04:59:17.232669 1642009 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 04:59:17.240242 1642009 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1580521.pem /etc/ssl/certs/51391683.0
	I1209 04:59:17.247909 1642009 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15805212.pem
	I1209 04:59:17.255197 1642009 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15805212.pem /etc/ssl/certs/15805212.pem
	I1209 04:59:17.262743 1642009 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15805212.pem
	I1209 04:59:17.266498 1642009 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:27 /usr/share/ca-certificates/15805212.pem
	I1209 04:59:17.266611 1642009 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15805212.pem
	I1209 04:59:17.308384 1642009 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 04:59:17.316145 1642009 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/15805212.pem /etc/ssl/certs/3ec20f2e.0
	I1209 04:59:17.323442 1642009 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:59:17.330864 1642009 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 04:59:17.338449 1642009 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:59:17.341962 1642009 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:17 /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:59:17.342030 1642009 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 04:59:17.388530 1642009 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 04:59:17.396378 1642009 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
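The openssl x509 -hash calls above explain the oddly named symlinks: OpenSSL looks CA certificates up in /etc/ssl/certs by subject-name hash, so each PEM needs a "<hash>.0" link pointing at it (the c_rehash convention; per this log, b5213941 is the hash for the minikubeCA subject). A sketch of the same convention, assuming openssl is on PATH, root privileges, and using the paths from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// openssl prints the subject-name hash that OpenSSL uses to locate CAs
	// in /etc/ssl/certs.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // "b5213941" for this CA, per the log
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // "ln -fs" semantics: replace any existing link
	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
		panic(err)
	}
}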
	I1209 04:59:17.403928 1642009 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 04:59:17.407821 1642009 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 04:59:17.407872 1642009 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.2 crio true true} ...
	I1209 04:59:17.407960 1642009 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-634473-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-634473 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 04:59:17.407985 1642009 kube-vip.go:115] generating kube-vip config ...
	I1209 04:59:17.408033 1642009 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1209 04:59:17.420361 1642009 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1209 04:59:17.420421 1642009 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1209 04:59:17.420492 1642009 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1209 04:59:17.428415 1642009 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 04:59:17.428490 1642009 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1209 04:59:17.439205 1642009 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1209 04:59:17.453044 1642009 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 04:59:17.466204 1642009 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1209 04:59:17.479047 1642009 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1209 04:59:17.482450 1642009 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 04:59:17.492023 1642009 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:59:17.606417 1642009 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 04:59:17.627776 1642009 host.go:66] Checking if "ha-634473" exists ...
	I1209 04:59:17.628097 1642009 start.go:318] joinCluster: &{Name:ha-634473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-634473 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:59:17.628248 1642009 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1209 04:59:17.628317 1642009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473
	I1209 04:59:17.651315 1642009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473/id_rsa Username:docker}
	I1209 04:59:17.826130 1642009 start.go:344] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 04:59:17.826215 1642009 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8hx9xi.jtl8vkmpbbibhwem --discovery-token-ca-cert-hash sha256:7776204d6c5f563a8dabf61d61a81585bb99fbd1023d362d699de436ef3f27fb --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-634473-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I1209 04:59:41.162547 1642009 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8hx9xi.jtl8vkmpbbibhwem --discovery-token-ca-cert-hash sha256:7776204d6c5f563a8dabf61d61a81585bb99fbd1023d362d699de436ef3f27fb --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-634473-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (23.336310761s)
	I1209 04:59:41.162641 1642009 ssh_runner.go:195] Run: sudo /bin/bash -c "systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet"
	I1209 04:59:41.517629 1642009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-634473-m02 minikube.k8s.io/updated_at=2025_12_09T04_59_41_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d minikube.k8s.io/name=ha-634473 minikube.k8s.io/primary=false
	I1209 04:59:41.631869 1642009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-634473-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1209 04:59:41.747423 1642009 start.go:320] duration metric: took 24.119321866s to joinCluster
	I1209 04:59:41.747491 1642009 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 04:59:41.747740 1642009 config.go:182] Loaded profile config "ha-634473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:59:41.750404 1642009 out.go:179] * Verifying Kubernetes components...
	I1209 04:59:41.753374 1642009 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 04:59:41.907047 1642009 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 04:59:41.921847 1642009 kapi.go:59] client config for ha-634473: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/client.crt", KeyFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/client.key", CAFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1209 04:59:41.921930 1642009 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1209 04:59:41.922161 1642009 node_ready.go:35] waiting up to 6m0s for node "ha-634473-m02" to be "Ready" ...
	W1209 04:59:43.933283 1642009 node_ready.go:57] node "ha-634473-m02" has "Ready":"False" status (will retry)
	W1209 04:59:46.426002 1642009 node_ready.go:57] node "ha-634473-m02" has "Ready":"False" status (will retry)
	W1209 04:59:48.926386 1642009 node_ready.go:57] node "ha-634473-m02" has "Ready":"False" status (will retry)
	W1209 04:59:51.425996 1642009 node_ready.go:57] node "ha-634473-m02" has "Ready":"False" status (will retry)
	W1209 04:59:53.426469 1642009 node_ready.go:57] node "ha-634473-m02" has "Ready":"False" status (will retry)
	W1209 04:59:55.926008 1642009 node_ready.go:57] node "ha-634473-m02" has "Ready":"False" status (will retry)
	W1209 04:59:58.425948 1642009 node_ready.go:57] node "ha-634473-m02" has "Ready":"False" status (will retry)
	W1209 05:00:00.534162 1642009 node_ready.go:57] node "ha-634473-m02" has "Ready":"False" status (will retry)
	W1209 05:00:02.926044 1642009 node_ready.go:57] node "ha-634473-m02" has "Ready":"False" status (will retry)
	W1209 05:00:05.425776 1642009 node_ready.go:57] node "ha-634473-m02" has "Ready":"False" status (will retry)
	W1209 05:00:07.926493 1642009 node_ready.go:57] node "ha-634473-m02" has "Ready":"False" status (will retry)
	W1209 05:00:10.426894 1642009 node_ready.go:57] node "ha-634473-m02" has "Ready":"False" status (will retry)
	W1209 05:00:12.925708 1642009 node_ready.go:57] node "ha-634473-m02" has "Ready":"False" status (will retry)
	W1209 05:00:14.925803 1642009 node_ready.go:57] node "ha-634473-m02" has "Ready":"False" status (will retry)
	W1209 05:00:16.925949 1642009 node_ready.go:57] node "ha-634473-m02" has "Ready":"False" status (will retry)
	W1209 05:00:19.426094 1642009 node_ready.go:57] node "ha-634473-m02" has "Ready":"False" status (will retry)
	W1209 05:00:21.926069 1642009 node_ready.go:57] node "ha-634473-m02" has "Ready":"False" status (will retry)
	W1209 05:00:23.926141 1642009 node_ready.go:57] node "ha-634473-m02" has "Ready":"False" status (will retry)
	I1209 05:00:25.926222 1642009 node_ready.go:49] node "ha-634473-m02" is "Ready"
	I1209 05:00:25.926253 1642009 node_ready.go:38] duration metric: took 44.00407078s for node "ha-634473-m02" to be "Ready" ...
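The node_ready.go wait above is a plain poll of the node's Ready condition through the API server, retrying on a few-second cadence until the kubelet reports Ready (44s here). A rough client-go equivalent, assuming the k8s.io/client-go dependency and the kubeconfig path written earlier in this log (this is not minikube's internal helper):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Mirror the log's 6m0s budget for the node to become Ready.
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-634473-m02", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second) // retry cadence comparable to the log's
	}
	panic("timed out waiting for node Ready")
}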
	I1209 05:00:25.926268 1642009 api_server.go:52] waiting for apiserver process to appear ...
	I1209 05:00:25.926329 1642009 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:00:25.938565 1642009 api_server.go:72] duration metric: took 44.191041544s to wait for apiserver process to appear ...
	I1209 05:00:25.938612 1642009 api_server.go:88] waiting for apiserver healthz status ...
	I1209 05:00:25.938642 1642009 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1209 05:00:25.946870 1642009 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1209 05:00:25.948051 1642009 api_server.go:141] control plane version: v1.34.2
	I1209 05:00:25.948072 1642009 api_server.go:131] duration metric: took 9.453312ms to wait for apiserver health ...
	I1209 05:00:25.948081 1642009 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 05:00:25.953781 1642009 system_pods.go:59] 17 kube-system pods found
	I1209 05:00:25.953879 1642009 system_pods.go:61] "coredns-66bc5c9577-8gdn9" [8b616706-5f8f-4db4-b56c-3ace5945f813] Running
	I1209 05:00:25.953901 1642009 system_pods.go:61] "coredns-66bc5c9577-qrw4s" [471319b3-f124-40ec-9787-1b5eaa2bedbe] Running
	I1209 05:00:25.953939 1642009 system_pods.go:61] "etcd-ha-634473" [a29ccfb1-81a2-4f00-aff8-79e53cec8ee9] Running
	I1209 05:00:25.953966 1642009 system_pods.go:61] "etcd-ha-634473-m02" [3c82b820-82d0-4eb7-ba8e-03d9f5a871b4] Running
	I1209 05:00:25.953984 1642009 system_pods.go:61] "kindnet-5k2gt" [ec3a4877-3ce2-475c-a765-31c686a555ed] Running
	I1209 05:00:25.954002 1642009 system_pods.go:61] "kindnet-vtmtm" [1b820731-ba18-4927-b1d9-9a5c514337f4] Running
	I1209 05:00:25.954022 1642009 system_pods.go:61] "kube-apiserver-ha-634473" [c011f362-595f-4d2e-8a14-73ef15a8a48a] Running
	I1209 05:00:25.954052 1642009 system_pods.go:61] "kube-apiserver-ha-634473-m02" [136387ab-b011-4d26-adf8-1b82b3614f80] Running
	I1209 05:00:25.954080 1642009 system_pods.go:61] "kube-controller-manager-ha-634473" [c275f2a5-e0c0-458d-bc27-4ed495c4e515] Running
	I1209 05:00:25.954101 1642009 system_pods.go:61] "kube-controller-manager-ha-634473-m02" [1a7c119c-f4a3-4bf4-8e73-2444c3b52a4d] Running
	I1209 05:00:25.954121 1642009 system_pods.go:61] "kube-proxy-bbwbg" [cdf69bd0-252e-4145-849f-8fac817460f0] Running
	I1209 05:00:25.954141 1642009 system_pods.go:61] "kube-proxy-m98rs" [b453d4e6-7379-4f15-8730-9217ef931335] Running
	I1209 05:00:25.954170 1642009 system_pods.go:61] "kube-scheduler-ha-634473" [9a3e5278-8c90-490d-90c0-d26cf385145c] Running
	I1209 05:00:25.954197 1642009 system_pods.go:61] "kube-scheduler-ha-634473-m02" [a09f6822-4742-4883-9418-be4ba7f13867] Running
	I1209 05:00:25.954217 1642009 system_pods.go:61] "kube-vip-ha-634473" [d1ca5157-f1ed-46d6-84cf-2b6f46a66e90] Running
	I1209 05:00:25.954239 1642009 system_pods.go:61] "kube-vip-ha-634473-m02" [285fea86-7d95-4747-93f6-f45b3bee0509] Running
	I1209 05:00:25.954258 1642009 system_pods.go:61] "storage-provisioner" [f371a424-f103-4f41-bb24-cd91e405167f] Running
	I1209 05:00:25.954289 1642009 system_pods.go:74] duration metric: took 6.200864ms to wait for pod list to return data ...
	I1209 05:00:25.954314 1642009 default_sa.go:34] waiting for default service account to be created ...
	I1209 05:00:25.958482 1642009 default_sa.go:45] found service account: "default"
	I1209 05:00:25.958549 1642009 default_sa.go:55] duration metric: took 4.213197ms for default service account to be created ...
	I1209 05:00:25.958618 1642009 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 05:00:25.967221 1642009 system_pods.go:86] 17 kube-system pods found
	I1209 05:00:25.967256 1642009 system_pods.go:89] "coredns-66bc5c9577-8gdn9" [8b616706-5f8f-4db4-b56c-3ace5945f813] Running
	I1209 05:00:25.967264 1642009 system_pods.go:89] "coredns-66bc5c9577-qrw4s" [471319b3-f124-40ec-9787-1b5eaa2bedbe] Running
	I1209 05:00:25.967269 1642009 system_pods.go:89] "etcd-ha-634473" [a29ccfb1-81a2-4f00-aff8-79e53cec8ee9] Running
	I1209 05:00:25.967273 1642009 system_pods.go:89] "etcd-ha-634473-m02" [3c82b820-82d0-4eb7-ba8e-03d9f5a871b4] Running
	I1209 05:00:25.967277 1642009 system_pods.go:89] "kindnet-5k2gt" [ec3a4877-3ce2-475c-a765-31c686a555ed] Running
	I1209 05:00:25.967281 1642009 system_pods.go:89] "kindnet-vtmtm" [1b820731-ba18-4927-b1d9-9a5c514337f4] Running
	I1209 05:00:25.967285 1642009 system_pods.go:89] "kube-apiserver-ha-634473" [c011f362-595f-4d2e-8a14-73ef15a8a48a] Running
	I1209 05:00:25.967289 1642009 system_pods.go:89] "kube-apiserver-ha-634473-m02" [136387ab-b011-4d26-adf8-1b82b3614f80] Running
	I1209 05:00:25.967292 1642009 system_pods.go:89] "kube-controller-manager-ha-634473" [c275f2a5-e0c0-458d-bc27-4ed495c4e515] Running
	I1209 05:00:25.967297 1642009 system_pods.go:89] "kube-controller-manager-ha-634473-m02" [1a7c119c-f4a3-4bf4-8e73-2444c3b52a4d] Running
	I1209 05:00:25.967301 1642009 system_pods.go:89] "kube-proxy-bbwbg" [cdf69bd0-252e-4145-849f-8fac817460f0] Running
	I1209 05:00:25.967305 1642009 system_pods.go:89] "kube-proxy-m98rs" [b453d4e6-7379-4f15-8730-9217ef931335] Running
	I1209 05:00:25.967315 1642009 system_pods.go:89] "kube-scheduler-ha-634473" [9a3e5278-8c90-490d-90c0-d26cf385145c] Running
	I1209 05:00:25.967319 1642009 system_pods.go:89] "kube-scheduler-ha-634473-m02" [a09f6822-4742-4883-9418-be4ba7f13867] Running
	I1209 05:00:25.967325 1642009 system_pods.go:89] "kube-vip-ha-634473" [d1ca5157-f1ed-46d6-84cf-2b6f46a66e90] Running
	I1209 05:00:25.967329 1642009 system_pods.go:89] "kube-vip-ha-634473-m02" [285fea86-7d95-4747-93f6-f45b3bee0509] Running
	I1209 05:00:25.967332 1642009 system_pods.go:89] "storage-provisioner" [f371a424-f103-4f41-bb24-cd91e405167f] Running
	I1209 05:00:25.967339 1642009 system_pods.go:126] duration metric: took 8.696439ms to wait for k8s-apps to be running ...
	I1209 05:00:25.967352 1642009 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 05:00:25.967415 1642009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:00:25.983331 1642009 system_svc.go:56] duration metric: took 15.97049ms WaitForService to wait for kubelet
	I1209 05:00:25.983408 1642009 kubeadm.go:587] duration metric: took 44.23588741s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 05:00:25.983434 1642009 node_conditions.go:102] verifying NodePressure condition ...
	I1209 05:00:25.986747 1642009 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1209 05:00:25.986781 1642009 node_conditions.go:123] node cpu capacity is 2
	I1209 05:00:25.986793 1642009 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1209 05:00:25.986799 1642009 node_conditions.go:123] node cpu capacity is 2
	I1209 05:00:25.986803 1642009 node_conditions.go:105] duration metric: took 3.363761ms to run NodePressure ...
	I1209 05:00:25.986816 1642009 start.go:242] waiting for startup goroutines ...
	I1209 05:00:25.986849 1642009 start.go:256] writing updated cluster config ...
	I1209 05:00:25.990414 1642009 out.go:203] 
	I1209 05:00:25.993580 1642009 config.go:182] Loaded profile config "ha-634473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 05:00:25.993718 1642009 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/config.json ...
	I1209 05:00:25.997091 1642009 out.go:179] * Starting "ha-634473-m03" control-plane node in "ha-634473" cluster
	I1209 05:00:26.001883 1642009 cache.go:134] Beginning downloading kic base image for docker with crio
	I1209 05:00:26.005086 1642009 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 05:00:26.008359 1642009 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 05:00:26.008394 1642009 cache.go:65] Caching tarball of preloaded images
	I1209 05:00:26.008470 1642009 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 05:00:26.008567 1642009 preload.go:238] Found /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1209 05:00:26.008589 1642009 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1209 05:00:26.008750 1642009 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/config.json ...
	I1209 05:00:26.053528 1642009 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 05:00:26.053557 1642009 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 05:00:26.053583 1642009 cache.go:243] Successfully downloaded all kic artifacts
	I1209 05:00:26.053611 1642009 start.go:360] acquireMachinesLock for ha-634473-m03: {Name:mkad7a74f07813a4918d708c010fb336357a57aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:00:26.053733 1642009 start.go:364] duration metric: took 99.546µs to acquireMachinesLock for "ha-634473-m03"
	I1209 05:00:26.053767 1642009 start.go:93] Provisioning new machine with config: &{Name:ha-634473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-634473 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 05:00:26.053908 1642009 start.go:125] createHost starting for "m03" (driver="docker")
	I1209 05:00:26.058010 1642009 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1209 05:00:26.058175 1642009 start.go:159] libmachine.API.Create for "ha-634473" (driver="docker")
	I1209 05:00:26.058222 1642009 client.go:173] LocalClient.Create starting
	I1209 05:00:26.058324 1642009 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem
	I1209 05:00:26.058376 1642009 main.go:143] libmachine: Decoding PEM data...
	I1209 05:00:26.058398 1642009 main.go:143] libmachine: Parsing certificate...
	I1209 05:00:26.058475 1642009 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem
	I1209 05:00:26.058504 1642009 main.go:143] libmachine: Decoding PEM data...
	I1209 05:00:26.058529 1642009 main.go:143] libmachine: Parsing certificate...
	I1209 05:00:26.058883 1642009 cli_runner.go:164] Run: docker network inspect ha-634473 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 05:00:26.076547 1642009 network_create.go:77] Found existing network {name:ha-634473 subnet:0x4002149920 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I1209 05:00:26.076590 1642009 kic.go:121] calculated static IP "192.168.49.4" for the "ha-634473-m03" container
	I1209 05:00:26.076674 1642009 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1209 05:00:26.094772 1642009 cli_runner.go:164] Run: docker volume create ha-634473-m03 --label name.minikube.sigs.k8s.io=ha-634473-m03 --label created_by.minikube.sigs.k8s.io=true
	I1209 05:00:26.118050 1642009 oci.go:103] Successfully created a docker volume ha-634473-m03
	I1209 05:00:26.118222 1642009 cli_runner.go:164] Run: docker run --rm --name ha-634473-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-634473-m03 --entrypoint /usr/bin/test -v ha-634473-m03:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -d /var/lib
	I1209 05:00:26.632632 1642009 oci.go:107] Successfully prepared a docker volume ha-634473-m03
	I1209 05:00:26.632697 1642009 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 05:00:26.632712 1642009 kic.go:194] Starting extracting preloaded images to volume ...
	I1209 05:00:26.632782 1642009 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ha-634473-m03:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir
	I1209 05:00:30.620191 1642009 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ha-634473-m03:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir: (3.987365232s)
	I1209 05:00:30.620229 1642009 kic.go:203] duration metric: took 3.987512772s to extract preloaded images to volume ...
	W1209 05:00:30.620392 1642009 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1209 05:00:30.620506 1642009 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1209 05:00:30.681844 1642009 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-634473-m03 --name ha-634473-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-634473-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-634473-m03 --network ha-634473 --ip 192.168.49.4 --volume ha-634473-m03:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c
	I1209 05:00:31.030334 1642009 cli_runner.go:164] Run: docker container inspect ha-634473-m03 --format={{.State.Running}}
	I1209 05:00:31.058322 1642009 cli_runner.go:164] Run: docker container inspect ha-634473-m03 --format={{.State.Status}}
	I1209 05:00:31.078052 1642009 cli_runner.go:164] Run: docker exec ha-634473-m03 stat /var/lib/dpkg/alternatives/iptables
	I1209 05:00:31.145107 1642009 oci.go:144] the created container "ha-634473-m03" has a running status.
	I1209 05:00:31.145136 1642009 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m03/id_rsa...
	I1209 05:00:31.250229 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1209 05:00:31.250280 1642009 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1209 05:00:31.280391 1642009 cli_runner.go:164] Run: docker container inspect ha-634473-m03 --format={{.State.Status}}
	I1209 05:00:31.316142 1642009 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1209 05:00:31.316163 1642009 kic_runner.go:114] Args: [docker exec --privileged ha-634473-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1209 05:00:31.371874 1642009 cli_runner.go:164] Run: docker container inspect ha-634473-m03 --format={{.State.Status}}
	I1209 05:00:31.398580 1642009 machine.go:94] provisionDockerMachine start ...
	I1209 05:00:31.398687 1642009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m03
	I1209 05:00:31.421401 1642009 main.go:143] libmachine: Using SSH client type: native
	I1209 05:00:31.421729 1642009 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34270 <nil> <nil>}
	I1209 05:00:31.421745 1642009 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 05:00:31.422410 1642009 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1209 05:00:34.582436 1642009 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-634473-m03
	
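The single "handshake failed: EOF" above is expected noise: provisionDockerMachine starts dialing the forwarded SSH port before sshd inside the freshly created container is accepting connections, and a later attempt succeeds. A sketch of that dial-with-retry pattern using golang.org/x/crypto/ssh (a stand-in, not minikube's sshutil; host, port, user, and key path are taken from this log):

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m03/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM only
		Timeout:         10 * time.Second,
	}
	// Retry the dial: sshd in a brand-new container may not be up yet.
	var client *ssh.Client
	for i := 0; i < 10; i++ {
		client, err = ssh.Dial("tcp", "127.0.0.1:34270", cfg)
		if err == nil {
			break
		}
		time.Sleep(2 * time.Second)
	}
	if client == nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	out, err := sess.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // "ha-634473-m03", matching the SSH output above
}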
	I1209 05:00:34.582517 1642009 ubuntu.go:182] provisioning hostname "ha-634473-m03"
	I1209 05:00:34.582621 1642009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m03
	I1209 05:00:34.603072 1642009 main.go:143] libmachine: Using SSH client type: native
	I1209 05:00:34.603380 1642009 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34270 <nil> <nil>}
	I1209 05:00:34.603390 1642009 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-634473-m03 && echo "ha-634473-m03" | sudo tee /etc/hostname
	I1209 05:00:34.768450 1642009 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-634473-m03
	
	I1209 05:00:34.768570 1642009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m03
	I1209 05:00:34.788105 1642009 main.go:143] libmachine: Using SSH client type: native
	I1209 05:00:34.788447 1642009 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34270 <nil> <nil>}
	I1209 05:00:34.788469 1642009 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-634473-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-634473-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-634473-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 05:00:34.943004 1642009 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 05:00:34.943032 1642009 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1577059/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1577059/.minikube}
	I1209 05:00:34.943049 1642009 ubuntu.go:190] setting up certificates
	I1209 05:00:34.943058 1642009 provision.go:84] configureAuth start
	I1209 05:00:34.943124 1642009 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473-m03
	I1209 05:00:34.962315 1642009 provision.go:143] copyHostCerts
	I1209 05:00:34.962362 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem
	I1209 05:00:34.962396 1642009 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem, removing ...
	I1209 05:00:34.962402 1642009 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem
	I1209 05:00:34.962481 1642009 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem (1078 bytes)
	I1209 05:00:34.962560 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem
	I1209 05:00:34.962617 1642009 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem, removing ...
	I1209 05:00:34.962623 1642009 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem
	I1209 05:00:34.962654 1642009 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem (1123 bytes)
	I1209 05:00:34.962701 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem
	I1209 05:00:34.962718 1642009 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem, removing ...
	I1209 05:00:34.962723 1642009 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem
	I1209 05:00:34.962750 1642009 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem (1675 bytes)
	I1209 05:00:34.962797 1642009 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem org=jenkins.ha-634473-m03 san=[127.0.0.1 192.168.49.4 ha-634473-m03 localhost minikube]
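The server-cert step above issues a CA-signed certificate whose SAN list covers every name the machine may be reached by: 127.0.0.1, the node IP, the hostname, localhost and minikube. A compressed sketch of that issuance with crypto/x509 (illustrative only; the subject, key size and lifetime here are assumptions):

```go
// Package pki sketches signing a server certificate with an existing CA,
// embedding the SANs minikube logs above.
package pki

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

func IssueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-634473-m03"}},
		DNSNames:     []string{"ha-634473-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.4")},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s in the profile
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Sign the template with the CA's key; the DER bytes can then be
	// PEM-encoded and scp'd to /etc/docker/server.pem as the log shows.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}
```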
	I1209 05:00:35.381469 1642009 provision.go:177] copyRemoteCerts
	I1209 05:00:35.381537 1642009 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 05:00:35.381583 1642009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m03
	I1209 05:00:35.402105 1642009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34270 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m03/id_rsa Username:docker}
	I1209 05:00:35.510672 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1209 05:00:35.510738 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 05:00:35.529793 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1209 05:00:35.529859 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1209 05:00:35.551386 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1209 05:00:35.551453 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 05:00:35.571820 1642009 provision.go:87] duration metric: took 628.748426ms to configureAuth
	I1209 05:00:35.571908 1642009 ubuntu.go:206] setting minikube options for container-runtime
	I1209 05:00:35.572185 1642009 config.go:182] Loaded profile config "ha-634473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 05:00:35.572340 1642009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m03
	I1209 05:00:35.590443 1642009 main.go:143] libmachine: Using SSH client type: native
	I1209 05:00:35.590791 1642009 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34270 <nil> <nil>}
	I1209 05:00:35.590805 1642009 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 05:00:35.907332 1642009 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 05:00:35.907358 1642009 machine.go:97] duration metric: took 4.508756633s to provisionDockerMachine
	I1209 05:00:35.907369 1642009 client.go:176] duration metric: took 9.849136754s to LocalClient.Create
	I1209 05:00:35.907384 1642009 start.go:167] duration metric: took 9.849210642s to libmachine.API.Create "ha-634473"
	I1209 05:00:35.907391 1642009 start.go:293] postStartSetup for "ha-634473-m03" (driver="docker")
	I1209 05:00:35.907401 1642009 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 05:00:35.907465 1642009 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 05:00:35.907518 1642009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m03
	I1209 05:00:35.929553 1642009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34270 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m03/id_rsa Username:docker}
	I1209 05:00:36.043733 1642009 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 05:00:36.049531 1642009 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 05:00:36.049562 1642009 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 05:00:36.049573 1642009 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1577059/.minikube/addons for local assets ...
	I1209 05:00:36.049637 1642009 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1577059/.minikube/files for local assets ...
	I1209 05:00:36.049722 1642009 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem -> 15805212.pem in /etc/ssl/certs
	I1209 05:00:36.049735 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem -> /etc/ssl/certs/15805212.pem
	I1209 05:00:36.049847 1642009 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 05:00:36.059131 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem --> /etc/ssl/certs/15805212.pem (1708 bytes)
	I1209 05:00:36.079267 1642009 start.go:296] duration metric: took 171.860908ms for postStartSetup
	I1209 05:00:36.079737 1642009 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473-m03
	I1209 05:00:36.097836 1642009 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/config.json ...
	I1209 05:00:36.098212 1642009 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:00:36.098270 1642009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m03
	I1209 05:00:36.126700 1642009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34270 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m03/id_rsa Username:docker}
	I1209 05:00:36.231729 1642009 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 05:00:36.236696 1642009 start.go:128] duration metric: took 10.182772474s to createHost
	I1209 05:00:36.236724 1642009 start.go:83] releasing machines lock for "ha-634473-m03", held for 10.182975834s
	I1209 05:00:36.236800 1642009 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473-m03
	I1209 05:00:36.259231 1642009 out.go:179] * Found network options:
	I1209 05:00:36.262199 1642009 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1209 05:00:36.265090 1642009 proxy.go:120] fail to check proxy env: Error ip not in block
	W1209 05:00:36.265123 1642009 proxy.go:120] fail to check proxy env: Error ip not in block
	W1209 05:00:36.265150 1642009 proxy.go:120] fail to check proxy env: Error ip not in block
	W1209 05:00:36.265171 1642009 proxy.go:120] fail to check proxy env: Error ip not in block
	I1209 05:00:36.265252 1642009 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 05:00:36.265292 1642009 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 05:00:36.265351 1642009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m03
	I1209 05:00:36.265295 1642009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m03
	I1209 05:00:36.285035 1642009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34270 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m03/id_rsa Username:docker}
	I1209 05:00:36.287617 1642009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34270 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m03/id_rsa Username:docker}
	I1209 05:00:36.444044 1642009 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 05:00:36.513745 1642009 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 05:00:36.513829 1642009 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 05:00:36.546201 1642009 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1209 05:00:36.546223 1642009 start.go:496] detecting cgroup driver to use...
	I1209 05:00:36.546267 1642009 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 05:00:36.546333 1642009 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 05:00:36.578660 1642009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 05:00:36.594343 1642009 docker.go:218] disabling cri-docker service (if available) ...
	I1209 05:00:36.594409 1642009 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 05:00:36.622288 1642009 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 05:00:36.642080 1642009 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 05:00:36.775922 1642009 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 05:00:36.913039 1642009 docker.go:234] disabling docker service ...
	I1209 05:00:36.913109 1642009 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 05:00:36.945387 1642009 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 05:00:36.962900 1642009 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 05:00:37.124490 1642009 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 05:00:37.269544 1642009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 05:00:37.284068 1642009 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 05:00:37.308058 1642009 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 05:00:37.308130 1642009 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 05:00:37.318805 1642009 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 05:00:37.318877 1642009 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 05:00:37.332182 1642009 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 05:00:37.341846 1642009 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 05:00:37.356259 1642009 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 05:00:37.370214 1642009 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 05:00:37.381775 1642009 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 05:00:37.397456 1642009 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 05:00:37.407354 1642009 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 05:00:37.417455 1642009 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 05:00:37.425912 1642009 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:00:37.560066 1642009 ssh_runner.go:195] Run: sudo systemctl restart crio
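The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted: pin the pause image, set cgroup_manager to "cgroupfs", force conmon_cgroup to "pod", and seed default_sysctls with net.ipv4.ip_unprivileged_port_start=0. For readers who prefer it to sed, a hedged Go equivalent of the first two edits (a local sketch; minikube itself performs these edits remotely over SSH):

```go
// Package criocfg mirrors the first two sed edits logged above: pinning the
// pause image and the cgroup driver in crio's drop-in config.
package criocfg

import (
	"os"
	"regexp"
)

func PatchCrioConf(path, pauseImage, cgroupMgr string) error {
	b, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	// (?m) makes ^/$ match per line, like sed's line-oriented substitution.
	b = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(b, []byte(`pause_image = "`+pauseImage+`"`))
	b = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(b, []byte(`cgroup_manager = "`+cgroupMgr+`"`))
	return os.WriteFile(path, b, 0o644)
}

// PatchCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
//	"registry.k8s.io/pause:3.10.1", "cgroupfs")
```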
	I1209 05:00:37.743014 1642009 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 05:00:37.743102 1642009 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 05:00:37.747285 1642009 start.go:564] Will wait 60s for crictl version
	I1209 05:00:37.747377 1642009 ssh_runner.go:195] Run: which crictl
	I1209 05:00:37.751063 1642009 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 05:00:37.781427 1642009 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
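"Will wait 60s for socket path" and "Will wait 60s for crictl version" are bounded polls. The shape of such a wait, sketched in Go (the stat-then-sleep loop is the technique; the helper name is illustrative):

```go
// Package waitx sketches the bounded 60s wait for the CRI socket that the
// log records before probing crictl version.
package waitx

import (
	"fmt"
	"os"
	"time"
)

func WaitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket exists; crictl version can be probed next
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

// WaitForSocket("/var/run/crio/crio.sock", 60*time.Second)
```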
	I1209 05:00:37.781515 1642009 ssh_runner.go:195] Run: crio --version
	I1209 05:00:37.811839 1642009 ssh_runner.go:195] Run: crio --version
	I1209 05:00:37.846952 1642009 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1209 05:00:37.849964 1642009 out.go:179]   - env NO_PROXY=192.168.49.2
	I1209 05:00:37.852913 1642009 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1209 05:00:37.855906 1642009 cli_runner.go:164] Run: docker network inspect ha-634473 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 05:00:37.874474 1642009 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1209 05:00:37.878692 1642009 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 05:00:37.891252 1642009 mustload.go:66] Loading cluster: ha-634473
	I1209 05:00:37.891513 1642009 config.go:182] Loaded profile config "ha-634473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 05:00:37.891801 1642009 cli_runner.go:164] Run: docker container inspect ha-634473 --format={{.State.Status}}
	I1209 05:00:37.910442 1642009 host.go:66] Checking if "ha-634473" exists ...
	I1209 05:00:37.911033 1642009 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473 for IP: 192.168.49.4
	I1209 05:00:37.911053 1642009 certs.go:195] generating shared ca certs ...
	I1209 05:00:37.911069 1642009 certs.go:227] acquiring lock for ca certs: {Name:mkbe8bce08db7aa945866791683d426e1b560718 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:00:37.911828 1642009 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key
	I1209 05:00:37.911883 1642009 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key
	I1209 05:00:37.911894 1642009 certs.go:257] generating profile certs ...
	I1209 05:00:37.911979 1642009 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/client.key
	I1209 05:00:37.912012 1642009 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.key.67ece5c0
	I1209 05:00:37.912025 1642009 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.crt.67ece5c0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1209 05:00:38.410107 1642009 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.crt.67ece5c0 ...
	I1209 05:00:38.410136 1642009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.crt.67ece5c0: {Name:mkee6062396caa9443dad9800918c13c0004750c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:00:38.410340 1642009 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.key.67ece5c0 ...
	I1209 05:00:38.410356 1642009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.key.67ece5c0: {Name:mk078d8e5e011f6a094cd2a463089b99a4f58607 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:00:38.410450 1642009 certs.go:382] copying /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.crt.67ece5c0 -> /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.crt
	I1209 05:00:38.410613 1642009 certs.go:386] copying /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.key.67ece5c0 -> /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.key
	I1209 05:00:38.410756 1642009 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/proxy-client.key
	I1209 05:00:38.410777 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1209 05:00:38.410793 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1209 05:00:38.410811 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1209 05:00:38.410830 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1209 05:00:38.410842 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1209 05:00:38.410857 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1209 05:00:38.410870 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1209 05:00:38.410885 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1209 05:00:38.410942 1642009 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem (1338 bytes)
	W1209 05:00:38.410978 1642009 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521_empty.pem, impossibly tiny 0 bytes
	I1209 05:00:38.410992 1642009 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 05:00:38.411021 1642009 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem (1078 bytes)
	I1209 05:00:38.411051 1642009 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem (1123 bytes)
	I1209 05:00:38.411080 1642009 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem (1675 bytes)
	I1209 05:00:38.411129 1642009 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem (1708 bytes)
	I1209 05:00:38.411166 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:00:38.411183 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem -> /usr/share/ca-certificates/1580521.pem
	I1209 05:00:38.411194 1642009 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem -> /usr/share/ca-certificates/15805212.pem
	I1209 05:00:38.411254 1642009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473
	I1209 05:00:38.429992 1642009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473/id_rsa Username:docker}
	I1209 05:00:38.534956 1642009 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1209 05:00:38.539448 1642009 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1209 05:00:38.548678 1642009 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1209 05:00:38.552716 1642009 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1209 05:00:38.562385 1642009 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1209 05:00:38.566218 1642009 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1209 05:00:38.575340 1642009 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1209 05:00:38.579326 1642009 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1209 05:00:38.588007 1642009 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1209 05:00:38.592102 1642009 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1209 05:00:38.601067 1642009 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1209 05:00:38.606177 1642009 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1209 05:00:38.615233 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 05:00:38.634475 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 05:00:38.654525 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 05:00:38.673756 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 05:00:38.692564 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1209 05:00:38.711555 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 05:00:38.730257 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 05:00:38.750331 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 05:00:38.768763 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 05:00:38.790798 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem --> /usr/share/ca-certificates/1580521.pem (1338 bytes)
	I1209 05:00:38.810810 1642009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem --> /usr/share/ca-certificates/15805212.pem (1708 bytes)
	I1209 05:00:38.830060 1642009 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1209 05:00:38.844664 1642009 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1209 05:00:38.861044 1642009 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1209 05:00:38.875598 1642009 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1209 05:00:38.893265 1642009 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1209 05:00:38.912576 1642009 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1209 05:00:38.928118 1642009 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1209 05:00:38.945294 1642009 ssh_runner.go:195] Run: openssl version
	I1209 05:00:38.952502 1642009 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15805212.pem
	I1209 05:00:38.962333 1642009 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15805212.pem /etc/ssl/certs/15805212.pem
	I1209 05:00:38.970437 1642009 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15805212.pem
	I1209 05:00:38.974389 1642009 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:27 /usr/share/ca-certificates/15805212.pem
	I1209 05:00:38.974507 1642009 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15805212.pem
	I1209 05:00:39.016908 1642009 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 05:00:39.025248 1642009 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/15805212.pem /etc/ssl/certs/3ec20f2e.0
	I1209 05:00:39.034686 1642009 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:00:39.042447 1642009 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 05:00:39.050956 1642009 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:00:39.054996 1642009 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:17 /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:00:39.055090 1642009 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:00:39.097497 1642009 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 05:00:39.106317 1642009 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1209 05:00:39.114413 1642009 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1580521.pem
	I1209 05:00:39.122498 1642009 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1580521.pem /etc/ssl/certs/1580521.pem
	I1209 05:00:39.130745 1642009 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1580521.pem
	I1209 05:00:39.134827 1642009 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:27 /usr/share/ca-certificates/1580521.pem
	I1209 05:00:39.134946 1642009 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1580521.pem
	I1209 05:00:39.176965 1642009 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 05:00:39.185013 1642009 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1580521.pem /etc/ssl/certs/51391683.0
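The test/ln pairs above install each CA into the OpenSSL trust directory under its subject-hash name (b5213941.0 for minikubeCA.pem, for example); OpenSSL looks certificates up by that hash at verification time. A sketch that obtains the hash by shelling out, as the log does, and then creates the link (hypothetical helper):

```go
// Package truststore mirrors the "openssl x509 -hash -noout" + "ln -fs"
// pairs in the log: link <subject-hash>.0 -> cert inside /etc/ssl/certs.
package truststore

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func LinkCertByHash(certPath, trustDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(trustDir, hash+".0")
	_ = os.Remove(link) // emulate ln -fs: replace any stale link first
	return os.Symlink(certPath, link)
}

// LinkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
```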
	I1209 05:00:39.192849 1642009 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 05:00:39.197148 1642009 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 05:00:39.197200 1642009 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.34.2 crio true true} ...
	I1209 05:00:39.197291 1642009 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-634473-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-634473 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
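The unit text above is a systemd drop-in: the empty ExecStart= line clears the ExecStart inherited from the base unit before the minikube-specific command line is declared. Rendering such a drop-in from a template with per-node values is straightforward (a sketch; the template body and field names are illustrative, not minikube's own):

```go
// Package kubeletcfg sketches rendering a kubelet systemd drop-in like the
// one logged above. The blank "ExecStart=" resets the base unit's command
// before the node-specific one is set.
package kubeletcfg

import (
	"os"
	"text/template"
)

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --hostname-override={{.NodeName}} --node-ip={{.NodeIP}} --kubeconfig=/etc/kubernetes/kubelet.conf

[Install]
`

func RenderDropIn(version, node, ip string) error {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	f, err := os.Create("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf")
	if err != nil {
		return err
	}
	defer f.Close()
	return t.Execute(f, map[string]string{
		"KubernetesVersion": version, "NodeName": node, "NodeIP": ip,
	})
}
```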
	I1209 05:00:39.197316 1642009 kube-vip.go:115] generating kube-vip config ...
	I1209 05:00:39.197368 1642009 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1209 05:00:39.210031 1642009 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1209 05:00:39.210140 1642009 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
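kube-vip can load-balance the control plane over IPVS only when the ip_vs kernel modules are loaded; the lsmod probe above came back empty, so the generated manifest relies on ARP leader election instead. The probe amounts to a scan of /proc/modules, e.g. (sketch):

```go
// Package kernelmod reproduces the fact the "lsmod | grep ip_vs" probe
// above establishes: whether any ip_vs kernel module is loaded.
package kernelmod

import (
	"bufio"
	"os"
	"strings"
)

func HasIPVS() (bool, error) {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return false, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// Each line starts with the module name, e.g. "ip_vs_rr 16384 ..."
		if strings.HasPrefix(sc.Text(), "ip_vs") {
			return true, nil
		}
	}
	return false, sc.Err()
}
```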
	I1209 05:00:39.210238 1642009 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1209 05:00:39.218559 1642009 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 05:00:39.218658 1642009 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1209 05:00:39.226858 1642009 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1209 05:00:39.240491 1642009 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 05:00:39.254053 1642009 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1209 05:00:39.269058 1642009 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1209 05:00:39.272916 1642009 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 05:00:39.283710 1642009 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:00:39.443726 1642009 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 05:00:39.460637 1642009 host.go:66] Checking if "ha-634473" exists ...
	I1209 05:00:39.460964 1642009 start.go:318] joinCluster: &{Name:ha-634473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-634473 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:00:39.461180 1642009 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1209 05:00:39.461247 1642009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473
	I1209 05:00:39.493328 1642009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473/id_rsa Username:docker}
	I1209 05:00:39.674697 1642009 start.go:344] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 05:00:39.674808 1642009 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kkf675.jcpo7j1udk5dkndw --discovery-token-ca-cert-hash sha256:7776204d6c5f563a8dabf61d61a81585bb99fbd1023d362d699de436ef3f27fb --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-634473-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I1209 05:01:02.581554 1642009 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kkf675.jcpo7j1udk5dkndw --discovery-token-ca-cert-hash sha256:7776204d6c5f563a8dabf61d61a81585bb99fbd1023d362d699de436ef3f27fb --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-634473-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (22.906707242s)
	I1209 05:01:02.581631 1642009 ssh_runner.go:195] Run: sudo /bin/bash -c "systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet"
	I1209 05:01:02.952798 1642009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-634473-m03 minikube.k8s.io/updated_at=2025_12_09T05_01_02_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d minikube.k8s.io/name=ha-634473 minikube.k8s.io/primary=false
	I1209 05:01:03.069480 1642009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-634473-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1209 05:01:03.200143 1642009 start.go:320] duration metric: took 23.739174421s to joinCluster
	I1209 05:01:03.200210 1642009 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 05:01:03.200593 1642009 config.go:182] Loaded profile config "ha-634473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 05:01:03.203459 1642009 out.go:179] * Verifying Kubernetes components...
	I1209 05:01:03.206501 1642009 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:01:03.358065 1642009 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 05:01:03.378013 1642009 kapi.go:59] client config for ha-634473: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/client.crt", KeyFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/client.key", CAFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1209 05:01:03.378092 1642009 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1209 05:01:03.378348 1642009 node_ready.go:35] waiting up to 6m0s for node "ha-634473-m03" to be "Ready" ...
	W1209 05:01:05.455801 1642009 node_ready.go:57] node "ha-634473-m03" has "Ready":"False" status (will retry)
	W1209 05:01:07.881792 1642009 node_ready.go:57] node "ha-634473-m03" has "Ready":"False" status (will retry)
	W1209 05:01:09.882255 1642009 node_ready.go:57] node "ha-634473-m03" has "Ready":"False" status (will retry)
	W1209 05:01:11.882296 1642009 node_ready.go:57] node "ha-634473-m03" has "Ready":"False" status (will retry)
	W1209 05:01:13.882420 1642009 node_ready.go:57] node "ha-634473-m03" has "Ready":"False" status (will retry)
	W1209 05:01:16.382764 1642009 node_ready.go:57] node "ha-634473-m03" has "Ready":"False" status (will retry)
	W1209 05:01:18.885680 1642009 node_ready.go:57] node "ha-634473-m03" has "Ready":"False" status (will retry)
	W1209 05:01:21.386105 1642009 node_ready.go:57] node "ha-634473-m03" has "Ready":"False" status (will retry)
	W1209 05:01:23.882318 1642009 node_ready.go:57] node "ha-634473-m03" has "Ready":"False" status (will retry)
	W1209 05:01:26.382866 1642009 node_ready.go:57] node "ha-634473-m03" has "Ready":"False" status (will retry)
	W1209 05:01:28.383570 1642009 node_ready.go:57] node "ha-634473-m03" has "Ready":"False" status (will retry)
	W1209 05:01:30.882308 1642009 node_ready.go:57] node "ha-634473-m03" has "Ready":"False" status (will retry)
	W1209 05:01:33.384955 1642009 node_ready.go:57] node "ha-634473-m03" has "Ready":"False" status (will retry)
	W1209 05:01:35.882738 1642009 node_ready.go:57] node "ha-634473-m03" has "Ready":"False" status (will retry)
	W1209 05:01:38.384922 1642009 node_ready.go:57] node "ha-634473-m03" has "Ready":"False" status (will retry)
	W1209 05:01:40.881799 1642009 node_ready.go:57] node "ha-634473-m03" has "Ready":"False" status (will retry)
	W1209 05:01:43.382910 1642009 node_ready.go:57] node "ha-634473-m03" has "Ready":"False" status (will retry)
	W1209 05:01:45.385179 1642009 node_ready.go:57] node "ha-634473-m03" has "Ready":"False" status (will retry)
	W1209 05:01:47.882221 1642009 node_ready.go:57] node "ha-634473-m03" has "Ready":"False" status (will retry)
	W1209 05:01:49.882763 1642009 node_ready.go:57] node "ha-634473-m03" has "Ready":"False" status (will retry)
	W1209 05:01:51.883171 1642009 node_ready.go:57] node "ha-634473-m03" has "Ready":"False" status (will retry)
	W1209 05:01:54.382233 1642009 node_ready.go:57] node "ha-634473-m03" has "Ready":"False" status (will retry)
	W1209 05:01:56.382407 1642009 node_ready.go:57] node "ha-634473-m03" has "Ready":"False" status (will retry)
	W1209 05:01:58.882540 1642009 node_ready.go:57] node "ha-634473-m03" has "Ready":"False" status (will retry)
	I1209 05:01:59.381861 1642009 node_ready.go:49] node "ha-634473-m03" is "Ready"
	I1209 05:01:59.381889 1642009 node_ready.go:38] duration metric: took 56.003522713s for node "ha-634473-m03" to be "Ready" ...
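The run of "will retry" lines above is one bounded poll on the node's Ready condition. With client-go that wait is conventionally written as below (a sketch assuming an already-constructed clientset):

```go
// Package nodewait sketches the bounded poll behind the "will retry" lines:
// fetch the node, inspect its Ready condition, retry until True or timeout.
package nodewait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func WaitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep retrying
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
```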
	I1209 05:01:59.381904 1642009 api_server.go:52] waiting for apiserver process to appear ...
	I1209 05:01:59.381966 1642009 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:01:59.406221 1642009 api_server.go:72] duration metric: took 56.205972063s to wait for apiserver process to appear ...
	I1209 05:01:59.406259 1642009 api_server.go:88] waiting for apiserver healthz status ...
	I1209 05:01:59.406283 1642009 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1209 05:01:59.414980 1642009 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1209 05:01:59.415962 1642009 api_server.go:141] control plane version: v1.34.2
	I1209 05:01:59.415991 1642009 api_server.go:131] duration metric: took 9.724671ms to wait for apiserver health ...
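The healthz gate is a plain HTTPS GET that must return 200 with body "ok". A minimal probe trusting the cluster CA (paths illustrative):

```go
// Package healthz issues the same GET the log shows against
// https://<apiserver>:8443/healthz, trusting the cluster's CA certificate.
package healthz

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"os"
	"time"
)

func Probe(url, caPath string) error {
	pem, err := os.ReadFile(caPath)
	if err != nil {
		return err
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(pem) {
		return fmt.Errorf("no CA certs parsed from %s", caPath)
	}
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

// Probe("https://192.168.49.2:8443/healthz", ".minikube/ca.crt")
```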
	I1209 05:01:59.416002 1642009 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 05:01:59.422385 1642009 system_pods.go:59] 24 kube-system pods found
	I1209 05:01:59.422426 1642009 system_pods.go:61] "coredns-66bc5c9577-8gdn9" [8b616706-5f8f-4db4-b56c-3ace5945f813] Running
	I1209 05:01:59.422433 1642009 system_pods.go:61] "coredns-66bc5c9577-qrw4s" [471319b3-f124-40ec-9787-1b5eaa2bedbe] Running
	I1209 05:01:59.422439 1642009 system_pods.go:61] "etcd-ha-634473" [a29ccfb1-81a2-4f00-aff8-79e53cec8ee9] Running
	I1209 05:01:59.422444 1642009 system_pods.go:61] "etcd-ha-634473-m02" [3c82b820-82d0-4eb7-ba8e-03d9f5a871b4] Running
	I1209 05:01:59.422448 1642009 system_pods.go:61] "etcd-ha-634473-m03" [70ba8114-a860-473b-9b17-eba3cb3f4c3d] Running
	I1209 05:01:59.422454 1642009 system_pods.go:61] "kindnet-5k2gt" [ec3a4877-3ce2-475c-a765-31c686a555ed] Running
	I1209 05:01:59.422459 1642009 system_pods.go:61] "kindnet-f5qsh" [af8c25eb-8628-4ce3-beb4-9d442f496cd8] Running
	I1209 05:01:59.422462 1642009 system_pods.go:61] "kindnet-vtmtm" [1b820731-ba18-4927-b1d9-9a5c514337f4] Running
	I1209 05:01:59.422468 1642009 system_pods.go:61] "kube-apiserver-ha-634473" [c011f362-595f-4d2e-8a14-73ef15a8a48a] Running
	I1209 05:01:59.422472 1642009 system_pods.go:61] "kube-apiserver-ha-634473-m02" [136387ab-b011-4d26-adf8-1b82b3614f80] Running
	I1209 05:01:59.422476 1642009 system_pods.go:61] "kube-apiserver-ha-634473-m03" [47c44e3d-99b0-4327-8661-6cc18db23adf] Running
	I1209 05:01:59.422487 1642009 system_pods.go:61] "kube-controller-manager-ha-634473" [c275f2a5-e0c0-458d-bc27-4ed495c4e515] Running
	I1209 05:01:59.422491 1642009 system_pods.go:61] "kube-controller-manager-ha-634473-m02" [1a7c119c-f4a3-4bf4-8e73-2444c3b52a4d] Running
	I1209 05:01:59.422505 1642009 system_pods.go:61] "kube-controller-manager-ha-634473-m03" [66e3385e-e3cb-4039-b271-7a1b35b1bae1] Running
	I1209 05:01:59.422508 1642009 system_pods.go:61] "kube-proxy-2424h" [062bc5a9-5538-433c-b11a-2551047c6a80] Running
	I1209 05:01:59.422512 1642009 system_pods.go:61] "kube-proxy-bbwbg" [cdf69bd0-252e-4145-849f-8fac817460f0] Running
	I1209 05:01:59.422517 1642009 system_pods.go:61] "kube-proxy-m98rs" [b453d4e6-7379-4f15-8730-9217ef931335] Running
	I1209 05:01:59.422529 1642009 system_pods.go:61] "kube-scheduler-ha-634473" [9a3e5278-8c90-490d-90c0-d26cf385145c] Running
	I1209 05:01:59.422533 1642009 system_pods.go:61] "kube-scheduler-ha-634473-m02" [a09f6822-4742-4883-9418-be4ba7f13867] Running
	I1209 05:01:59.422538 1642009 system_pods.go:61] "kube-scheduler-ha-634473-m03" [35fe3916-2ff7-4f3e-b577-19ba2cc835d5] Running
	I1209 05:01:59.422545 1642009 system_pods.go:61] "kube-vip-ha-634473" [d1ca5157-f1ed-46d6-84cf-2b6f46a66e90] Running
	I1209 05:01:59.422549 1642009 system_pods.go:61] "kube-vip-ha-634473-m02" [285fea86-7d95-4747-93f6-f45b3bee0509] Running
	I1209 05:01:59.422561 1642009 system_pods.go:61] "kube-vip-ha-634473-m03" [5aa03511-7ca9-4485-9661-33b7518f4a4d] Running
	I1209 05:01:59.422565 1642009 system_pods.go:61] "storage-provisioner" [f371a424-f103-4f41-bb24-cd91e405167f] Running
	I1209 05:01:59.422684 1642009 system_pods.go:74] duration metric: took 6.674866ms to wait for pod list to return data ...
	I1209 05:01:59.422703 1642009 default_sa.go:34] waiting for default service account to be created ...
	I1209 05:01:59.426425 1642009 default_sa.go:45] found service account: "default"
	I1209 05:01:59.426450 1642009 default_sa.go:55] duration metric: took 3.74073ms for default service account to be created ...
	I1209 05:01:59.426462 1642009 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 05:01:59.436878 1642009 system_pods.go:86] 24 kube-system pods found
	I1209 05:01:59.436967 1642009 system_pods.go:89] "coredns-66bc5c9577-8gdn9" [8b616706-5f8f-4db4-b56c-3ace5945f813] Running
	I1209 05:01:59.436991 1642009 system_pods.go:89] "coredns-66bc5c9577-qrw4s" [471319b3-f124-40ec-9787-1b5eaa2bedbe] Running
	I1209 05:01:59.437015 1642009 system_pods.go:89] "etcd-ha-634473" [a29ccfb1-81a2-4f00-aff8-79e53cec8ee9] Running
	I1209 05:01:59.437048 1642009 system_pods.go:89] "etcd-ha-634473-m02" [3c82b820-82d0-4eb7-ba8e-03d9f5a871b4] Running
	I1209 05:01:59.437073 1642009 system_pods.go:89] "etcd-ha-634473-m03" [70ba8114-a860-473b-9b17-eba3cb3f4c3d] Running
	I1209 05:01:59.437095 1642009 system_pods.go:89] "kindnet-5k2gt" [ec3a4877-3ce2-475c-a765-31c686a555ed] Running
	I1209 05:01:59.437117 1642009 system_pods.go:89] "kindnet-f5qsh" [af8c25eb-8628-4ce3-beb4-9d442f496cd8] Running
	I1209 05:01:59.437149 1642009 system_pods.go:89] "kindnet-vtmtm" [1b820731-ba18-4927-b1d9-9a5c514337f4] Running
	I1209 05:01:59.437170 1642009 system_pods.go:89] "kube-apiserver-ha-634473" [c011f362-595f-4d2e-8a14-73ef15a8a48a] Running
	I1209 05:01:59.437189 1642009 system_pods.go:89] "kube-apiserver-ha-634473-m02" [136387ab-b011-4d26-adf8-1b82b3614f80] Running
	I1209 05:01:59.437212 1642009 system_pods.go:89] "kube-apiserver-ha-634473-m03" [47c44e3d-99b0-4327-8661-6cc18db23adf] Running
	I1209 05:01:59.437231 1642009 system_pods.go:89] "kube-controller-manager-ha-634473" [c275f2a5-e0c0-458d-bc27-4ed495c4e515] Running
	I1209 05:01:59.437262 1642009 system_pods.go:89] "kube-controller-manager-ha-634473-m02" [1a7c119c-f4a3-4bf4-8e73-2444c3b52a4d] Running
	I1209 05:01:59.437288 1642009 system_pods.go:89] "kube-controller-manager-ha-634473-m03" [66e3385e-e3cb-4039-b271-7a1b35b1bae1] Running
	I1209 05:01:59.437309 1642009 system_pods.go:89] "kube-proxy-2424h" [062bc5a9-5538-433c-b11a-2551047c6a80] Running
	I1209 05:01:59.437328 1642009 system_pods.go:89] "kube-proxy-bbwbg" [cdf69bd0-252e-4145-849f-8fac817460f0] Running
	I1209 05:01:59.437336 1642009 system_pods.go:89] "kube-proxy-m98rs" [b453d4e6-7379-4f15-8730-9217ef931335] Running
	I1209 05:01:59.437340 1642009 system_pods.go:89] "kube-scheduler-ha-634473" [9a3e5278-8c90-490d-90c0-d26cf385145c] Running
	I1209 05:01:59.437344 1642009 system_pods.go:89] "kube-scheduler-ha-634473-m02" [a09f6822-4742-4883-9418-be4ba7f13867] Running
	I1209 05:01:59.437349 1642009 system_pods.go:89] "kube-scheduler-ha-634473-m03" [35fe3916-2ff7-4f3e-b577-19ba2cc835d5] Running
	I1209 05:01:59.437379 1642009 system_pods.go:89] "kube-vip-ha-634473" [d1ca5157-f1ed-46d6-84cf-2b6f46a66e90] Running
	I1209 05:01:59.437398 1642009 system_pods.go:89] "kube-vip-ha-634473-m02" [285fea86-7d95-4747-93f6-f45b3bee0509] Running
	I1209 05:01:59.437418 1642009 system_pods.go:89] "kube-vip-ha-634473-m03" [5aa03511-7ca9-4485-9661-33b7518f4a4d] Running
	I1209 05:01:59.437437 1642009 system_pods.go:89] "storage-provisioner" [f371a424-f103-4f41-bb24-cd91e405167f] Running
	I1209 05:01:59.437472 1642009 system_pods.go:126] duration metric: took 11.002422ms to wait for k8s-apps to be running ...
	I1209 05:01:59.437500 1642009 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 05:01:59.437592 1642009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:01:59.464414 1642009 system_svc.go:56] duration metric: took 26.895333ms WaitForService to wait for kubelet
	I1209 05:01:59.464508 1642009 kubeadm.go:587] duration metric: took 56.264261661s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 05:01:59.464538 1642009 node_conditions.go:102] verifying NodePressure condition ...
	I1209 05:01:59.468134 1642009 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1209 05:01:59.468175 1642009 node_conditions.go:123] node cpu capacity is 2
	I1209 05:01:59.468191 1642009 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1209 05:01:59.468196 1642009 node_conditions.go:123] node cpu capacity is 2
	I1209 05:01:59.468200 1642009 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1209 05:01:59.468204 1642009 node_conditions.go:123] node cpu capacity is 2
	I1209 05:01:59.468208 1642009 node_conditions.go:105] duration metric: took 3.664257ms to run NodePressure ...
	I1209 05:01:59.468222 1642009 start.go:242] waiting for startup goroutines ...
	I1209 05:01:59.468257 1642009 start.go:256] writing updated cluster config ...
	I1209 05:01:59.468655 1642009 ssh_runner.go:195] Run: rm -f paused
	I1209 05:01:59.473592 1642009 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 05:01:59.474207 1642009 kapi.go:59] client config for ha-634473: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/client.crt", KeyFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/ha-634473/client.key", CAFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 05:01:59.493816 1642009 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8gdn9" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:01:59.503543 1642009 pod_ready.go:94] pod "coredns-66bc5c9577-8gdn9" is "Ready"
	I1209 05:01:59.503618 1642009 pod_ready.go:86] duration metric: took 9.72966ms for pod "coredns-66bc5c9577-8gdn9" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:01:59.503645 1642009 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qrw4s" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:01:59.510906 1642009 pod_ready.go:94] pod "coredns-66bc5c9577-qrw4s" is "Ready"
	I1209 05:01:59.510974 1642009 pod_ready.go:86] duration metric: took 7.309233ms for pod "coredns-66bc5c9577-qrw4s" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:01:59.514916 1642009 pod_ready.go:83] waiting for pod "etcd-ha-634473" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:01:59.523862 1642009 pod_ready.go:94] pod "etcd-ha-634473" is "Ready"
	I1209 05:01:59.523936 1642009 pod_ready.go:86] duration metric: took 8.941855ms for pod "etcd-ha-634473" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:01:59.523962 1642009 pod_ready.go:83] waiting for pod "etcd-ha-634473-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:01:59.530554 1642009 pod_ready.go:94] pod "etcd-ha-634473-m02" is "Ready"
	I1209 05:01:59.530674 1642009 pod_ready.go:86] duration metric: took 6.690144ms for pod "etcd-ha-634473-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:01:59.530700 1642009 pod_ready.go:83] waiting for pod "etcd-ha-634473-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:01:59.676007 1642009 request.go:683] "Waited before sending request" delay="145.16626ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-634473-m03"
	I1209 05:01:59.875818 1642009 request.go:683] "Waited before sending request" delay="196.306512ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-634473-m03"
	I1209 05:01:59.879082 1642009 pod_ready.go:94] pod "etcd-ha-634473-m03" is "Ready"
	I1209 05:01:59.879113 1642009 pod_ready.go:86] duration metric: took 348.392296ms for pod "etcd-ha-634473-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:02:00.084708 1642009 request.go:683] "Waited before sending request" delay="205.483907ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1209 05:02:00.122462 1642009 pod_ready.go:83] waiting for pod "kube-apiserver-ha-634473" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:02:00.277488 1642009 request.go:683] "Waited before sending request" delay="154.316781ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-634473"
	I1209 05:02:00.476571 1642009 request.go:683] "Waited before sending request" delay="178.205419ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-634473"
	I1209 05:02:00.480412 1642009 pod_ready.go:94] pod "kube-apiserver-ha-634473" is "Ready"
	I1209 05:02:00.480443 1642009 pod_ready.go:86] duration metric: took 357.370544ms for pod "kube-apiserver-ha-634473" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:02:00.480491 1642009 pod_ready.go:83] waiting for pod "kube-apiserver-ha-634473-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:02:00.675849 1642009 request.go:683] "Waited before sending request" delay="195.264217ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-634473-m02"
	I1209 05:02:00.875853 1642009 request.go:683] "Waited before sending request" delay="196.26431ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-634473-m02"
	I1209 05:02:00.879544 1642009 pod_ready.go:94] pod "kube-apiserver-ha-634473-m02" is "Ready"
	I1209 05:02:00.879576 1642009 pod_ready.go:86] duration metric: took 399.075935ms for pod "kube-apiserver-ha-634473-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:02:00.879587 1642009 pod_ready.go:83] waiting for pod "kube-apiserver-ha-634473-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:02:01.075949 1642009 request.go:683] "Waited before sending request" delay="196.255702ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-634473-m03"
	I1209 05:02:01.276132 1642009 request.go:683] "Waited before sending request" delay="196.2402ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-634473-m03"
	I1209 05:02:01.279969 1642009 pod_ready.go:94] pod "kube-apiserver-ha-634473-m03" is "Ready"
	I1209 05:02:01.279999 1642009 pod_ready.go:86] duration metric: took 400.404538ms for pod "kube-apiserver-ha-634473-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:02:01.476256 1642009 request.go:683] "Waited before sending request" delay="196.155882ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1209 05:02:01.481441 1642009 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-634473" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:02:01.676377 1642009 request.go:683] "Waited before sending request" delay="194.820448ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-634473"
	I1209 05:02:01.876513 1642009 request.go:683] "Waited before sending request" delay="196.339845ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-634473"
	I1209 05:02:01.880003 1642009 pod_ready.go:94] pod "kube-controller-manager-ha-634473" is "Ready"
	I1209 05:02:01.880037 1642009 pod_ready.go:86] duration metric: took 398.560383ms for pod "kube-controller-manager-ha-634473" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:02:01.880049 1642009 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-634473-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:02:02.076427 1642009 request.go:683] "Waited before sending request" delay="196.296856ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-634473-m02"
	I1209 05:02:02.276356 1642009 request.go:683] "Waited before sending request" delay="196.320658ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-634473-m02"
	I1209 05:02:02.279903 1642009 pod_ready.go:94] pod "kube-controller-manager-ha-634473-m02" is "Ready"
	I1209 05:02:02.279985 1642009 pod_ready.go:86] duration metric: took 399.928356ms for pod "kube-controller-manager-ha-634473-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:02:02.280023 1642009 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-634473-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:02:02.476385 1642009 request.go:683] "Waited before sending request" delay="196.264846ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-634473-m03"
	I1209 05:02:02.675908 1642009 request.go:683] "Waited before sending request" delay="196.252907ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-634473-m03"
	I1209 05:02:02.679360 1642009 pod_ready.go:94] pod "kube-controller-manager-ha-634473-m03" is "Ready"
	I1209 05:02:02.679390 1642009 pod_ready.go:86] duration metric: took 399.346345ms for pod "kube-controller-manager-ha-634473-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:02:02.875780 1642009 request.go:683] "Waited before sending request" delay="196.271599ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1209 05:02:02.880241 1642009 pod_ready.go:83] waiting for pod "kube-proxy-2424h" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:02:03.076512 1642009 request.go:683] "Waited before sending request" delay="196.171075ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2424h"
	I1209 05:02:03.276189 1642009 request.go:683] "Waited before sending request" delay="196.306305ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-634473-m03"
	I1209 05:02:03.280228 1642009 pod_ready.go:94] pod "kube-proxy-2424h" is "Ready"
	I1209 05:02:03.280262 1642009 pod_ready.go:86] duration metric: took 399.993704ms for pod "kube-proxy-2424h" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:02:03.280274 1642009 pod_ready.go:83] waiting for pod "kube-proxy-bbwbg" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:02:03.476636 1642009 request.go:683] "Waited before sending request" delay="196.2468ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bbwbg"
	I1209 05:02:03.676466 1642009 request.go:683] "Waited before sending request" delay="196.334612ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-634473-m02"
	I1209 05:02:03.680177 1642009 pod_ready.go:94] pod "kube-proxy-bbwbg" is "Ready"
	I1209 05:02:03.680203 1642009 pod_ready.go:86] duration metric: took 399.922571ms for pod "kube-proxy-bbwbg" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:02:03.680212 1642009 pod_ready.go:83] waiting for pod "kube-proxy-m98rs" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:02:03.876355 1642009 request.go:683] "Waited before sending request" delay="196.060917ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m98rs"
	I1209 05:02:04.076008 1642009 request.go:683] "Waited before sending request" delay="196.278274ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-634473"
	I1209 05:02:04.079517 1642009 pod_ready.go:94] pod "kube-proxy-m98rs" is "Ready"
	I1209 05:02:04.079543 1642009 pod_ready.go:86] duration metric: took 399.324241ms for pod "kube-proxy-m98rs" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:02:04.275822 1642009 request.go:683] "Waited before sending request" delay="196.174788ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I1209 05:02:04.280507 1642009 pod_ready.go:83] waiting for pod "kube-scheduler-ha-634473" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:02:04.475844 1642009 request.go:683] "Waited before sending request" delay="195.219636ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-634473"
	I1209 05:02:04.676640 1642009 request.go:683] "Waited before sending request" delay="197.349759ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-634473"
	I1209 05:02:04.679920 1642009 pod_ready.go:94] pod "kube-scheduler-ha-634473" is "Ready"
	I1209 05:02:04.679956 1642009 pod_ready.go:86] duration metric: took 399.419262ms for pod "kube-scheduler-ha-634473" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:02:04.679968 1642009 pod_ready.go:83] waiting for pod "kube-scheduler-ha-634473-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:02:04.876365 1642009 request.go:683] "Waited before sending request" delay="196.288956ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-634473-m02"
	I1209 05:02:05.076701 1642009 request.go:683] "Waited before sending request" delay="196.379779ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-634473-m02"
	I1209 05:02:05.080221 1642009 pod_ready.go:94] pod "kube-scheduler-ha-634473-m02" is "Ready"
	I1209 05:02:05.080250 1642009 pod_ready.go:86] duration metric: took 400.275729ms for pod "kube-scheduler-ha-634473-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:02:05.080261 1642009 pod_ready.go:83] waiting for pod "kube-scheduler-ha-634473-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:02:05.276747 1642009 request.go:683] "Waited before sending request" delay="196.389484ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-634473-m03"
	I1209 05:02:05.476155 1642009 request.go:683] "Waited before sending request" delay="196.320248ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-634473-m03"
	I1209 05:02:05.479513 1642009 pod_ready.go:94] pod "kube-scheduler-ha-634473-m03" is "Ready"
	I1209 05:02:05.479543 1642009 pod_ready.go:86] duration metric: took 399.275467ms for pod "kube-scheduler-ha-634473-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:02:05.479557 1642009 pod_ready.go:40] duration metric: took 6.005882197s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 05:02:05.545783 1642009 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1209 05:02:05.549051 1642009 out.go:179] * Done! kubectl is now configured to use "ha-634473" cluster and "default" namespace by default
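
The pod-readiness wait above polls fixed label selectors with a 4m ceiling (pod_ready.go). It can be replayed by hand against the same cluster; a minimal sketch using kubectl, assuming the kubeconfig written for the ha-634473 profile is the active context:

  # list the control-plane pods the wait loop polls, by the same selectors
  kubectl -n kube-system get pods -l k8s-app=kube-dns
  kubectl -n kube-system get pods -l component=kube-apiserver
  # block until one of them reports Ready, reusing the 4m ceiling from the log
  kubectl -n kube-system wait --for=condition=Ready pod/etcd-ha-634473 --timeout=4m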
	
	
	==> CRI-O <==
	Dec 09 04:59:44 ha-634473 crio[839]: time="2025-12-09T04:59:44.86452321Z" level=info msg="Starting container: d7b90c6c403e97128628de6f6ab995d234de539e1d460ea513660cfada0aee35" id=7ca7d73b-e2c8-438a-ab02-7ccb42e955e2 name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 04:59:44 ha-634473 crio[839]: time="2025-12-09T04:59:44.866900901Z" level=info msg="Started container" PID=1854 containerID=d4f3a44406af8ee65fa9737b44fb57f47d2feac58277d34a64ebcf613526d9a2 description=kube-system/coredns-66bc5c9577-qrw4s/coredns id=68bd4c6c-5d6c-4d1c-b50a-4b0cd39b9b50 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0f0fdc7e6bf6f9a3762315d28aa86e9e6b075a0e6a85bd1ebbd5a68a14e06b18
	Dec 09 04:59:44 ha-634473 crio[839]: time="2025-12-09T04:59:44.875570449Z" level=info msg="Started container" PID=1860 containerID=d7b90c6c403e97128628de6f6ab995d234de539e1d460ea513660cfada0aee35 description=kube-system/coredns-66bc5c9577-8gdn9/coredns id=7ca7d73b-e2c8-438a-ab02-7ccb42e955e2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9caaa8d406f056dca942c2eaad76735955021105266f4a55220ae7be74006f2e
	Dec 09 05:02:09 ha-634473 crio[839]: time="2025-12-09T05:02:09.093297891Z" level=info msg="Running pod sandbox: default/busybox-7b57f96db7-bp5sh/POD" id=29a9dce8-624d-4c4b-980f-f473bd06da1b name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 09 05:02:09 ha-634473 crio[839]: time="2025-12-09T05:02:09.093377302Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 05:02:09 ha-634473 crio[839]: time="2025-12-09T05:02:09.111764525Z" level=info msg="Got pod network &{Name:busybox-7b57f96db7-bp5sh Namespace:default ID:c7b42bf55b7717bf1641ddb32a83a965e7bbff3ed399d9b7ee00c88af466bd53 UID:78028b06-52d2-4b18-8209-004382ee7d00 NetNS:/var/run/netns/21da750d-baf9-481c-9fa7-dc43df9576dd Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000078da0}] Aliases:map[]}"
	Dec 09 05:02:09 ha-634473 crio[839]: time="2025-12-09T05:02:09.111827427Z" level=info msg="Adding pod default_busybox-7b57f96db7-bp5sh to CNI network \"kindnet\" (type=ptp)"
	Dec 09 05:02:09 ha-634473 crio[839]: time="2025-12-09T05:02:09.150606563Z" level=info msg="Got pod network &{Name:busybox-7b57f96db7-bp5sh Namespace:default ID:c7b42bf55b7717bf1641ddb32a83a965e7bbff3ed399d9b7ee00c88af466bd53 UID:78028b06-52d2-4b18-8209-004382ee7d00 NetNS:/var/run/netns/21da750d-baf9-481c-9fa7-dc43df9576dd Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000078da0}] Aliases:map[]}"
	Dec 09 05:02:09 ha-634473 crio[839]: time="2025-12-09T05:02:09.151750781Z" level=info msg="Checking pod default_busybox-7b57f96db7-bp5sh for CNI network kindnet (type=ptp)"
	Dec 09 05:02:09 ha-634473 crio[839]: time="2025-12-09T05:02:09.166075611Z" level=info msg="Ran pod sandbox c7b42bf55b7717bf1641ddb32a83a965e7bbff3ed399d9b7ee00c88af466bd53 with infra container: default/busybox-7b57f96db7-bp5sh/POD" id=29a9dce8-624d-4c4b-980f-f473bd06da1b name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 09 05:02:09 ha-634473 crio[839]: time="2025-12-09T05:02:09.175474712Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=e30651ac-9f82-4276-bb75-6ea3e92b119e name=/runtime.v1.ImageService/ImageStatus
	Dec 09 05:02:09 ha-634473 crio[839]: time="2025-12-09T05:02:09.17561855Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=e30651ac-9f82-4276-bb75-6ea3e92b119e name=/runtime.v1.ImageService/ImageStatus
	Dec 09 05:02:09 ha-634473 crio[839]: time="2025-12-09T05:02:09.175655983Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28 found" id=e30651ac-9f82-4276-bb75-6ea3e92b119e name=/runtime.v1.ImageService/ImageStatus
	Dec 09 05:02:09 ha-634473 crio[839]: time="2025-12-09T05:02:09.177778607Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=d7bed8ea-39c4-4e3b-9412-d13a0ee2bef0 name=/runtime.v1.ImageService/PullImage
	Dec 09 05:02:09 ha-634473 crio[839]: time="2025-12-09T05:02:09.182277907Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Dec 09 05:02:11 ha-634473 crio[839]: time="2025-12-09T05:02:11.225495305Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=d7bed8ea-39c4-4e3b-9412-d13a0ee2bef0 name=/runtime.v1.ImageService/PullImage
	Dec 09 05:02:11 ha-634473 crio[839]: time="2025-12-09T05:02:11.226300916Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=820a1b06-9ae2-4514-ac57-645cdde2a339 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 05:02:11 ha-634473 crio[839]: time="2025-12-09T05:02:11.228211671Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=5c63be98-90ac-4e27-a067-0bee19626132 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 05:02:11 ha-634473 crio[839]: time="2025-12-09T05:02:11.234153263Z" level=info msg="Creating container: default/busybox-7b57f96db7-bp5sh/busybox" id=8d56245d-9f8b-4885-bc94-d788972691a6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 05:02:11 ha-634473 crio[839]: time="2025-12-09T05:02:11.23470067Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 05:02:11 ha-634473 crio[839]: time="2025-12-09T05:02:11.241050721Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 05:02:11 ha-634473 crio[839]: time="2025-12-09T05:02:11.241718492Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 05:02:11 ha-634473 crio[839]: time="2025-12-09T05:02:11.263512317Z" level=info msg="Created container 8a856ac05da353efd17c887d52806c8750ddeb20908946cef768f1d0af18bda0: default/busybox-7b57f96db7-bp5sh/busybox" id=8d56245d-9f8b-4885-bc94-d788972691a6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 05:02:11 ha-634473 crio[839]: time="2025-12-09T05:02:11.267259661Z" level=info msg="Starting container: 8a856ac05da353efd17c887d52806c8750ddeb20908946cef768f1d0af18bda0" id=3830d99d-8f40-4acc-a3f3-f75c7dab8d50 name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 05:02:11 ha-634473 crio[839]: time="2025-12-09T05:02:11.272251705Z" level=info msg="Started container" PID=2025 containerID=8a856ac05da353efd17c887d52806c8750ddeb20908946cef768f1d0af18bda0 description=default/busybox-7b57f96db7-bp5sh/busybox id=3830d99d-8f40-4acc-a3f3-f75c7dab8d50 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c7b42bf55b7717bf1641ddb32a83a965e7bbff3ed399d9b7ee00c88af466bd53
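
The not-found/pull sequence in the CRI-O log above can be reproduced against the runtime directly; a sketch using crictl over minikube ssh (the busybox tag is taken from the log; the sudo/ssh invocation form is an assumption about the node image):

  # same image-status check the kubelet issued, then the pull it fell back to
  minikube -p ha-634473 ssh -- sudo crictl inspecti gcr.io/k8s-minikube/busybox:1.28
  minikube -p ha-634473 ssh -- sudo crictl pull gcr.io/k8s-minikube/busybox:1.28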
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	8a856ac05da35       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   10 minutes ago      Running             busybox                   0                   c7b42bf55b771       busybox-7b57f96db7-bp5sh            default
	d7b90c6c403e9       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 minutes ago      Running             coredns                   0                   9caaa8d406f05       coredns-66bc5c9577-8gdn9            kube-system
	d4f3a44406af8       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 minutes ago      Running             coredns                   0                   0f0fdc7e6bf6f       coredns-66bc5c9577-qrw4s            kube-system
	7969f5318f500       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 minutes ago      Running             storage-provisioner       0                   959cb045ec903       storage-provisioner                 kube-system
	bc8bb5fd26005       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                      13 minutes ago      Running             kube-proxy                0                   1cd3ac0a484d1       kube-proxy-m98rs                    kube-system
	923ab0e3190a2       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      13 minutes ago      Running             kindnet-cni               0                   6fea8d6674335       kindnet-vtmtm                       kube-system
	817a31332419d       ghcr.io/kube-vip/kube-vip@sha256:74581ff5ab80d8bd25e525d4066eb06614fd65c953d7a38e710a59d42399d439     13 minutes ago      Running             kube-vip                  0                   f10a00367f891       kube-vip-ha-634473                  kube-system
	38a7faffca46f       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                      13 minutes ago      Running             kube-scheduler            0                   121e0bd871f83       kube-scheduler-ha-634473            kube-system
	b2a5be9725102       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                      13 minutes ago      Running             kube-controller-manager   0                   c4f57035b9f31       kube-controller-manager-ha-634473   kube-system
	9760dd4805579       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                      13 minutes ago      Running             etcd                      0                   4bb1b47764be9       etcd-ha-634473                      kube-system
	f22a05924eab1       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                      13 minutes ago      Running             kube-apiserver            0                   4f8ac5d35a35b       kube-apiserver-ha-634473            kube-system
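
The container table above is CRI-O's view from the primary node; it should correspond to what the following returns when run there (a sketch, assuming default minikube ssh access to the ha-634473 control plane):

  minikube -p ha-634473 ssh -- sudo crictl ps -a
  minikube -p ha-634473 ssh -- sudo crictl images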
	
	
	==> coredns [d4f3a44406af8ee65fa9737b44fb57f47d2feac58277d34a64ebcf613526d9a2] <==
	[INFO] 10.244.2.2:48924 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.00176782s
	[INFO] 10.244.2.2:43550 - 6 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.012198725s
	[INFO] 10.244.0.4:55523 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.000686454s
	[INFO] 10.244.1.2:53993 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.000845915s
	[INFO] 10.244.2.2:56767 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002497614s
	[INFO] 10.244.2.2:57765 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000133032s
	[INFO] 10.244.2.2:60243 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000238232s
	[INFO] 10.244.2.2:40171 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002214794s
	[INFO] 10.244.0.4:55738 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151084s
	[INFO] 10.244.0.4:47087 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002141751s
	[INFO] 10.244.0.4:54869 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000163777s
	[INFO] 10.244.0.4:56601 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001184415s
	[INFO] 10.244.0.4:51257 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000189944s
	[INFO] 10.244.1.2:36913 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134674s
	[INFO] 10.244.1.2:51366 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002200886s
	[INFO] 10.244.1.2:35134 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000117713s
	[INFO] 10.244.1.2:47896 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000159429s
	[INFO] 10.244.2.2:45312 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000104478s
	[INFO] 10.244.0.4:34475 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000129496s
	[INFO] 10.244.1.2:40398 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146103s
	[INFO] 10.244.1.2:34077 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000149213s
	[INFO] 10.244.2.2:44769 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000122111s
	[INFO] 10.244.2.2:53138 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000103567s
	[INFO] 10.244.1.2:38635 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000073733s
	[INFO] 10.244.1.2:42958 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000112445s
	
	
	==> coredns [d7b90c6c403e97128628de6f6ab995d234de539e1d460ea513660cfada0aee35] <==
	[INFO] 10.244.2.2:57357 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000134525s
	[INFO] 10.244.2.2:42583 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000104125s
	[INFO] 10.244.0.4:33780 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000147112s
	[INFO] 10.244.0.4:56836 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000139736s
	[INFO] 10.244.0.4:32774 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123096s
	[INFO] 10.244.1.2:39419 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002606366s
	[INFO] 10.244.1.2:48548 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00015878s
	[INFO] 10.244.1.2:47596 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000173394s
	[INFO] 10.244.1.2:56565 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009482s
	[INFO] 10.244.2.2:58979 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139719s
	[INFO] 10.244.2.2:59238 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000288144s
	[INFO] 10.244.2.2:41884 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00016491s
	[INFO] 10.244.0.4:46697 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108629s
	[INFO] 10.244.0.4:43509 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000139244s
	[INFO] 10.244.0.4:58850 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060899s
	[INFO] 10.244.1.2:40411 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000253682s
	[INFO] 10.244.1.2:60590 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088248s
	[INFO] 10.244.2.2:48986 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015453s
	[INFO] 10.244.2.2:35708 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000178802s
	[INFO] 10.244.0.4:43280 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108392s
	[INFO] 10.244.0.4:45610 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000160462s
	[INFO] 10.244.0.4:58692 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00010575s
	[INFO] 10.244.0.4:51181 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000063312s
	[INFO] 10.244.1.2:37249 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133082s
	[INFO] 10.244.1.2:34998 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000094525s
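
The queries logged by both coredns replicas (kubernetes.default, host.minikube.internal, and the reverse lookups) originate from the busybox test pods on 10.244.x.x. They can be replayed from one of those pods; a sketch assuming the pod name from the container table above:

  kubectl exec busybox-7b57f96db7-bp5sh -- nslookup kubernetes.default
  kubectl exec busybox-7b57f96db7-bp5sh -- nslookup host.minikube.internal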
	
	
	==> describe nodes <==
	Name:               ha-634473
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-634473
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d
	                    minikube.k8s.io/name=ha-634473
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_09T04_58_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Dec 2025 04:58:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-634473
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Dec 2025 05:12:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Dec 2025 05:12:13 +0000   Tue, 09 Dec 2025 04:58:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Dec 2025 05:12:13 +0000   Tue, 09 Dec 2025 04:58:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Dec 2025 05:12:13 +0000   Tue, 09 Dec 2025 04:58:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Dec 2025 05:12:13 +0000   Tue, 09 Dec 2025 04:59:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-634473
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 23f1bd729e908485546e733d693697cd
	  System UUID:                6406ae0c-6df0-4db2-9cf5-a860432b2b4f
	  Boot ID:                    3c42bf6f-64e9-4298-a947-b5a2e6063f1e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-bp5sh             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-8gdn9             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 coredns-66bc5c9577-qrw4s             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 etcd-ha-634473                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-vtmtm                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-634473             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-634473    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-m98rs                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-634473             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-634473                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 13m                kube-proxy       
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node ha-634473 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x8 over 13m)  kubelet          Node ha-634473 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node ha-634473 status is now: NodeHasSufficientMemory
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 13m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  13m                kubelet          Node ha-634473 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                kubelet          Node ha-634473 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m                kubelet          Node ha-634473 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                node-controller  Node ha-634473 event: Registered Node ha-634473 in Controller
	  Normal   RegisteredNode           12m                node-controller  Node ha-634473 event: Registered Node ha-634473 in Controller
	  Normal   NodeReady                12m                kubelet          Node ha-634473 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node ha-634473 event: Registered Node ha-634473 in Controller
	
	
	Name:               ha-634473-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-634473-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d
	                    minikube.k8s.io/name=ha-634473
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_09T04_59_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Dec 2025 04:59:40 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-634473-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Dec 2025 05:03:35 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 09 Dec 2025 05:03:16 +0000   Tue, 09 Dec 2025 05:04:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 09 Dec 2025 05:03:16 +0000   Tue, 09 Dec 2025 05:04:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 09 Dec 2025 05:03:16 +0000   Tue, 09 Dec 2025 05:04:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 09 Dec 2025 05:03:16 +0000   Tue, 09 Dec 2025 05:04:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-634473-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 23f1bd729e908485546e733d693697cd
	  System UUID:                74dbc6fe-ceef-4c47-b089-91f02281a369
	  Boot ID:                    3c42bf6f-64e9-4298-a947-b5a2e6063f1e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-dt58k                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-634473-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-5k2gt                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-ha-634473-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-634473-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-bbwbg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-634473-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-634473-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        12m    kube-proxy       
	  Normal  RegisteredNode  12m    node-controller  Node ha-634473-m02 event: Registered Node ha-634473-m02 in Controller
	  Normal  RegisteredNode  12m    node-controller  Node ha-634473-m02 event: Registered Node ha-634473-m02 in Controller
	  Normal  RegisteredNode  11m    node-controller  Node ha-634473-m02 event: Registered Node ha-634473-m02 in Controller
	  Normal  NodeNotReady    7m54s  node-controller  Node ha-634473-m02 status is now: NodeNotReady
	
	
	Name:               ha-634473-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-634473-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d
	                    minikube.k8s.io/name=ha-634473
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_09T05_01_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Dec 2025 05:01:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-634473-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Dec 2025 05:12:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Dec 2025 05:12:15 +0000   Tue, 09 Dec 2025 05:01:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Dec 2025 05:12:15 +0000   Tue, 09 Dec 2025 05:01:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Dec 2025 05:12:15 +0000   Tue, 09 Dec 2025 05:01:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Dec 2025 05:12:15 +0000   Tue, 09 Dec 2025 05:01:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-634473-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 23f1bd729e908485546e733d693697cd
	  System UUID:                b01c15a9-5361-4818-baf5-d73cd10932ec
	  Boot ID:                    3c42bf6f-64e9-4298-a947-b5a2e6063f1e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-5fvp7                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-634473-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kindnet-f5qsh                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-ha-634473-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-634473-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-2424h                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-634473-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-634473-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        10m   kube-proxy       
	  Normal  RegisteredNode  11m   node-controller  Node ha-634473-m03 event: Registered Node ha-634473-m03 in Controller
	  Normal  RegisteredNode  11m   node-controller  Node ha-634473-m03 event: Registered Node ha-634473-m03 in Controller
	  Normal  RegisteredNode  11m   node-controller  Node ha-634473-m03 event: Registered Node ha-634473-m03 in Controller
	
	
	Name:               ha-634473-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-634473-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d
	                    minikube.k8s.io/name=ha-634473
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_09T05_02_32_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Dec 2025 05:02:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-634473-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Dec 2025 05:12:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Dec 2025 05:10:01 +0000   Tue, 09 Dec 2025 05:02:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Dec 2025 05:10:01 +0000   Tue, 09 Dec 2025 05:02:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Dec 2025 05:10:01 +0000   Tue, 09 Dec 2025 05:02:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Dec 2025 05:10:01 +0000   Tue, 09 Dec 2025 05:03:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-634473-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 23f1bd729e908485546e733d693697cd
	  System UUID:                be457499-244e-4f8e-82cb-1b92a57f4888
	  Boot ID:                    3c42bf6f-64e9-4298-a947-b5a2e6063f1e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-4sst6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m53s
	  kube-system                 kindnet-4kqkl               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      9m48s
	  kube-system                 kube-proxy-7dmtq            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m45s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m48s (x3 over 9m48s)  kubelet          Node ha-634473-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m48s (x3 over 9m48s)  kubelet          Node ha-634473-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m48s (x3 over 9m48s)  kubelet          Node ha-634473-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m47s                  node-controller  Node ha-634473-m04 event: Registered Node ha-634473-m04 in Controller
	  Normal  RegisteredNode           9m45s                  node-controller  Node ha-634473-m04 event: Registered Node ha-634473-m04 in Controller
	  Normal  RegisteredNode           9m44s                  node-controller  Node ha-634473-m04 event: Registered Node ha-634473-m04 in Controller
	  Normal  NodeReady                9m5s                   kubelet          Node ha-634473-m04 status is now: NodeReady
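
Of the four nodes described above, only ha-634473-m02 is unhealthy: its conditions are Unknown, its lease stopped renewing at 05:03:35, and the node controller applied the unreachable NoSchedule/NoExecute taints, consistent with the RestartSecondaryNode scenario. A sketch for inspecting just that node and its taints with kubectl:

  kubectl describe node ha-634473-m02
  kubectl get node ha-634473-m02 -o jsonpath='{.spec.taints}'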
	
	
	==> dmesg <==
	[Dec 9 03:35] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 04:15] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 9 04:17] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:23] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:24] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:41] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:58] overlayfs: idmapped layers are currently not supported
	[Dec 9 04:59] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:00] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:02] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:03] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [9760dd48055791f4001b110071712cf63598299c0464e6a9a1b27c04aa7b061c] <==
	{"level":"warn","ts":"2025-12-09T05:11:50.475054Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"7975177128cc846a","rtt":"3.749412ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-09T05:11:52.730755Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"7975177128cc846a","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-09T05:11:52.730810Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"7975177128cc846a","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-09T05:11:55.469601Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"7975177128cc846a","rtt":"17.878136ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-09T05:11:55.475840Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"7975177128cc846a","rtt":"3.749412ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-09T05:11:56.732380Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"7975177128cc846a","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-09T05:11:56.732431Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"7975177128cc846a","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-09T05:12:00.471147Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"7975177128cc846a","rtt":"17.878136ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-09T05:12:00.476477Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"7975177128cc846a","rtt":"3.749412ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-09T05:12:00.733481Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"7975177128cc846a","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-09T05:12:00.733537Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"7975177128cc846a","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-09T05:12:04.735248Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"7975177128cc846a","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-09T05:12:04.735338Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"7975177128cc846a","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-09T05:12:05.472178Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"7975177128cc846a","rtt":"17.878136ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-09T05:12:05.477395Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"7975177128cc846a","rtt":"3.749412ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-09T05:12:08.736870Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"7975177128cc846a","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-09T05:12:08.737044Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"7975177128cc846a","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-09T05:12:10.472471Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"7975177128cc846a","rtt":"17.878136ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-09T05:12:10.477648Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"7975177128cc846a","rtt":"3.749412ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-09T05:12:12.743132Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"7975177128cc846a","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-09T05:12:12.743188Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"7975177128cc846a","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-09T05:12:15.472788Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"7975177128cc846a","rtt":"17.878136ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-09T05:12:15.478016Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"7975177128cc846a","rtt":"3.749412ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-09T05:12:16.744737Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"7975177128cc846a","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-09T05:12:16.744791Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"7975177128cc846a","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	
	
	==> kernel <==
	 05:12:19 up  9:54,  0 user,  load average: 1.56, 1.14, 0.94
	Linux ha-634473 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [923ab0e3190a2e69a27fc26680a7a84ee8a694ed53445ca94434f1f802cdd712] <==
	I1209 05:11:44.249981       1 main.go:324] Node ha-634473-m04 has CIDR [10.244.3.0/24] 
	I1209 05:11:54.254896       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 05:11:54.255007       1 main.go:301] handling current node
	I1209 05:11:54.255047       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1209 05:11:54.255075       1 main.go:324] Node ha-634473-m02 has CIDR [10.244.1.0/24] 
	I1209 05:11:54.255266       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1209 05:11:54.255304       1 main.go:324] Node ha-634473-m03 has CIDR [10.244.2.0/24] 
	I1209 05:11:54.255390       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1209 05:11:54.255425       1 main.go:324] Node ha-634473-m04 has CIDR [10.244.3.0/24] 
	I1209 05:12:04.247268       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 05:12:04.247304       1 main.go:301] handling current node
	I1209 05:12:04.247320       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1209 05:12:04.247325       1 main.go:324] Node ha-634473-m02 has CIDR [10.244.1.0/24] 
	I1209 05:12:04.247511       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1209 05:12:04.247526       1 main.go:324] Node ha-634473-m03 has CIDR [10.244.2.0/24] 
	I1209 05:12:04.247590       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1209 05:12:04.247601       1 main.go:324] Node ha-634473-m04 has CIDR [10.244.3.0/24] 
	I1209 05:12:14.247238       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1209 05:12:14.247270       1 main.go:324] Node ha-634473-m02 has CIDR [10.244.1.0/24] 
	I1209 05:12:14.247457       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1209 05:12:14.247473       1 main.go:324] Node ha-634473-m03 has CIDR [10.244.2.0/24] 
	I1209 05:12:14.247541       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1209 05:12:14.247559       1 main.go:324] Node ha-634473-m04 has CIDR [10.244.3.0/24] 
	I1209 05:12:14.247622       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 05:12:14.247634       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f22a05924eab128b6621d22ab5e9561c5dc32a3192e4c7c7de9d896fd57d6ced] <==
	I1209 04:58:56.487276       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1209 04:58:56.610487       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1209 04:58:56.618386       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1209 04:58:56.619937       1 controller.go:667] quota admission added evaluator for: endpoints
	I1209 04:58:56.625486       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1209 04:58:57.534118       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1209 04:58:57.547889       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1209 04:58:57.574911       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1209 04:58:57.596990       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1209 04:59:03.235361       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1209 04:59:03.239842       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1209 04:59:03.532721       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1209 04:59:03.670858       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1209 05:02:12.228989       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	E1209 05:02:13.064965       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:35624: use of closed network connection
	E1209 05:02:13.296836       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:35648: use of closed network connection
	E1209 05:02:13.747569       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:35682: use of closed network connection
	E1209 05:02:14.246226       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:35724: use of closed network connection
	E1209 05:02:14.461743       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:35742: use of closed network connection
	E1209 05:02:15.062879       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:35774: use of closed network connection
	E1209 05:02:15.274760       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:35792: use of closed network connection
	E1209 05:02:15.700495       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:35824: use of closed network connection
	E1209 05:02:16.144705       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:35856: use of closed network connection
	W1209 05:03:56.629783       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.4]
	I1209 05:08:54.569703       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [b2a5be97251029e7fd96c8d68bbd049c3d767694655c066795cc3e2cc670e9d1] <==
	I1209 04:59:02.627277       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1209 04:59:02.628320       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1209 04:59:02.628437       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1209 04:59:02.629233       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1209 04:59:02.629339       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1209 04:59:02.630481       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1209 04:59:02.630608       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1209 04:59:40.807009       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-634473-m02\" does not exist"
	I1209 04:59:40.852522       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-634473-m02" podCIDRs=["10.244.1.0/24"]
	I1209 04:59:42.586990       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-634473-m02"
	I1209 04:59:47.587767       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1209 05:01:01.413295       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-g5q8x failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-g5q8x\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1209 05:01:02.031545       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-634473-m03\" does not exist"
	I1209 05:01:02.092037       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-634473-m03" podCIDRs=["10.244.2.0/24"]
	I1209 05:01:02.620076       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-634473-m03"
	E1209 05:02:07.586841       1 replica_set.go:587] "Unhandled Error" err="sync \"default/busybox-7b57f96db7\" failed with Operation cannot be fulfilled on replicasets.apps \"busybox-7b57f96db7\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1209 05:02:31.610473       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-rq7gn failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-rq7gn\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1209 05:02:31.636042       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-rq7gn failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-rq7gn\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1209 05:02:31.871015       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-634473-m04\" does not exist"
	I1209 05:02:31.926890       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-634473-m04" podCIDRs=["10.244.3.0/24"]
	I1209 05:02:32.634050       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-634473-m04"
	I1209 05:03:14.458530       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-634473-m04"
	I1209 05:04:25.796403       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-634473-m04"
	I1209 05:09:26.076196       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-dt58k"
	E1209 05:09:26.283325       1 replica_set.go:587] "Unhandled Error" err="sync \"default/busybox-7b57f96db7\" failed with Operation cannot be fulfilled on replicasets.apps \"busybox-7b57f96db7\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	
	
	==> kube-proxy [bc8bb5fd260054d8d305b2ce482d9e981c9db8dfbabc6f4d549dedbd4bcbfe09] <==
	I1209 04:59:04.135736       1 server_linux.go:53] "Using iptables proxy"
	I1209 04:59:04.265895       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1209 04:59:04.368780       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1209 04:59:04.368814       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1209 04:59:04.368888       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 04:59:04.457912       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1209 04:59:04.457965       1 server_linux.go:132] "Using iptables Proxier"
	I1209 04:59:04.469487       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 04:59:04.469865       1 server.go:527] "Version info" version="v1.34.2"
	I1209 04:59:04.470068       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 04:59:04.471422       1 config.go:200] "Starting service config controller"
	I1209 04:59:04.471486       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1209 04:59:04.471543       1 config.go:106] "Starting endpoint slice config controller"
	I1209 04:59:04.471572       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1209 04:59:04.471609       1 config.go:403] "Starting serviceCIDR config controller"
	I1209 04:59:04.471637       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1209 04:59:04.472290       1 config.go:309] "Starting node config controller"
	I1209 04:59:04.472356       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1209 04:59:04.472395       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1209 04:59:04.572345       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1209 04:59:04.572381       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1209 04:59:04.572436       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
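	
	The one error in this block is kube-proxy flagging that nodePortAddresses is unset, so NodePort connections are accepted on every local IP. Its own suggestion ("--nodeport-addresses primary") maps to a one-line KubeProxyConfiguration change; a hedged sketch of just that fragment (the file name is illustrative, and wiring it into minikube's kubeadm flow is not shown here):
	
	# hedged sketch: restrict NodePorts to the node's primary addresses,
	# per the warning's own suggestion
	printf '%s\n' \
	  'apiVersion: kubeproxy.config.k8s.io/v1alpha1' \
	  'kind: KubeProxyConfiguration' \
	  'nodePortAddresses: ["primary"]' > kube-proxy-config.yaml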
	
	
	==> kube-scheduler [38a7faffca46f95586f1c39e7344e38a5d3440b82f7be0c16cde002d6f598a12] <==
	I1209 05:02:06.892530       1 cache.go:512] "Pod was added to a different node than it was assumed" podKey="31b37665-cc65-451a-912b-9ec4c3ebe9d5" pod="default/busybox-7b57f96db7-4ldtk" assumedNode="ha-634473-m03" currentNode="ha-634473-m02"
	E1209 05:02:06.920953       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-4ldtk\": pod busybox-7b57f96db7-4ldtk is already assigned to node \"ha-634473-m03\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-4ldtk" node="ha-634473-m02"
	E1209 05:02:06.921088       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 31b37665-cc65-451a-912b-9ec4c3ebe9d5(default/busybox-7b57f96db7-4ldtk) was assumed on ha-634473-m02 but assigned to ha-634473-m03" logger="UnhandledError" pod="default/busybox-7b57f96db7-4ldtk"
	E1209 05:02:06.921161       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-4ldtk\": pod busybox-7b57f96db7-4ldtk is already assigned to node \"ha-634473-m03\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-4ldtk"
	I1209 05:02:06.923156       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-4ldtk" node="ha-634473-m03"
	E1209 05:02:09.171251       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-5fvp7\": pod busybox-7b57f96db7-5fvp7 is already assigned to node \"ha-634473-m03\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-5fvp7" node="ha-634473-m03"
	E1209 05:02:09.171312       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod a321b0b3-873f-4fa7-901c-6b4cad900cee(default/busybox-7b57f96db7-5fvp7) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="default/busybox-7b57f96db7-5fvp7"
	E1209 05:02:09.171335       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-5fvp7\": pod busybox-7b57f96db7-5fvp7 is already assigned to node \"ha-634473-m03\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-5fvp7"
	I1209 05:02:09.174235       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-5fvp7" node="ha-634473-m03"
	E1209 05:02:32.045388       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-4kqkl\": pod kindnet-4kqkl is already assigned to node \"ha-634473-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-4kqkl" node="ha-634473-m04"
	E1209 05:02:32.045493       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod a08767e9-f272-4660-8447-1f5c894ed31d(kube-system/kindnet-4kqkl) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-4kqkl"
	E1209 05:02:32.045516       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-4kqkl\": pod kindnet-4kqkl is already assigned to node \"ha-634473-m04\"" logger="UnhandledError" pod="kube-system/kindnet-4kqkl"
	I1209 05:02:32.047421       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-4kqkl" node="ha-634473-m04"
	E1209 05:02:32.067983       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-7dmtq\": pod kube-proxy-7dmtq is already assigned to node \"ha-634473-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-7dmtq" node="ha-634473-m04"
	E1209 05:02:32.068114       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod e46a5d09-a53e-495e-9428-ed9d3143b94f(kube-system/kube-proxy-7dmtq) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-7dmtq"
	E1209 05:02:32.068170       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-7dmtq\": pod kube-proxy-7dmtq is already assigned to node \"ha-634473-m04\"" logger="UnhandledError" pod="kube-system/kube-proxy-7dmtq"
	I1209 05:02:32.072672       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-7dmtq" node="ha-634473-m04"
	E1209 05:02:32.220904       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-9f4t7\": pod kindnet-9f4t7 is already assigned to node \"ha-634473-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-9f4t7" node="ha-634473-m04"
	E1209 05:02:32.221038       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod afdf094c-8dd4-4868-a50a-ecaf3a9e10e4(kube-system/kindnet-9f4t7) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-9f4t7"
	E1209 05:02:32.221097       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-9f4t7\": pod kindnet-9f4t7 is already assigned to node \"ha-634473-m04\"" logger="UnhandledError" pod="kube-system/kindnet-9f4t7"
	I1209 05:02:32.225414       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-9f4t7" node="ha-634473-m04"
	E1209 05:09:27.620308       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-4sst6\": pod busybox-7b57f96db7-4sst6 is already assigned to node \"ha-634473-m04\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-4sst6" node="ha-634473-m04"
	E1209 05:09:27.620366       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod f9263760-f82a-402b-9d24-5ca2743e97af(default/busybox-7b57f96db7-4sst6) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="default/busybox-7b57f96db7-4sst6"
	E1209 05:09:27.620388       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-4sst6\": pod busybox-7b57f96db7-4sst6 is already assigned to node \"ha-634473-m04\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-4sst6"
	I1209 05:09:27.623822       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-4sst6" node="ha-634473-m04"
	
	
	==> kubelet <==
	Dec 09 04:59:44 ha-634473 kubelet[1363]: I1209 04:59:44.531231    1363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64c68\" (UniqueName: \"kubernetes.io/projected/471319b3-f124-40ec-9787-1b5eaa2bedbe-kube-api-access-64c68\") pod \"coredns-66bc5c9577-qrw4s\" (UID: \"471319b3-f124-40ec-9787-1b5eaa2bedbe\") " pod="kube-system/coredns-66bc5c9577-qrw4s"
	Dec 09 04:59:44 ha-634473 kubelet[1363]: I1209 04:59:44.531344    1363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b616706-5f8f-4db4-b56c-3ace5945f813-config-volume\") pod \"coredns-66bc5c9577-8gdn9\" (UID: \"8b616706-5f8f-4db4-b56c-3ace5945f813\") " pod="kube-system/coredns-66bc5c9577-8gdn9"
	Dec 09 04:59:44 ha-634473 kubelet[1363]: W1209 04:59:44.767311    1363 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/451a940c6775333987f96bda1a8dac55be755a72cdd93ec853e9dcbc59469bf4/crio-959cb045ec903985f7fb0853cd089abc33acfcf4f21b5c006c4d9898974755a5 WatchSource:0}: Error finding container 959cb045ec903985f7fb0853cd089abc33acfcf4f21b5c006c4d9898974755a5: Status 404 returned error can't find the container with id 959cb045ec903985f7fb0853cd089abc33acfcf4f21b5c006c4d9898974755a5
	Dec 09 04:59:44 ha-634473 kubelet[1363]: W1209 04:59:44.776520    1363 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/451a940c6775333987f96bda1a8dac55be755a72cdd93ec853e9dcbc59469bf4/crio-0f0fdc7e6bf6f9a3762315d28aa86e9e6b075a0e6a85bd1ebbd5a68a14e06b18 WatchSource:0}: Error finding container 0f0fdc7e6bf6f9a3762315d28aa86e9e6b075a0e6a85bd1ebbd5a68a14e06b18: Status 404 returned error can't find the container with id 0f0fdc7e6bf6f9a3762315d28aa86e9e6b075a0e6a85bd1ebbd5a68a14e06b18
	Dec 09 04:59:44 ha-634473 kubelet[1363]: W1209 04:59:44.804142    1363 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/451a940c6775333987f96bda1a8dac55be755a72cdd93ec853e9dcbc59469bf4/crio-9caaa8d406f056dca942c2eaad76735955021105266f4a55220ae7be74006f2e WatchSource:0}: Error finding container 9caaa8d406f056dca942c2eaad76735955021105266f4a55220ae7be74006f2e: Status 404 returned error can't find the container with id 9caaa8d406f056dca942c2eaad76735955021105266f4a55220ae7be74006f2e
	Dec 09 04:59:45 ha-634473 kubelet[1363]: I1209 04:59:45.760581    1363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-8gdn9" podStartSLOduration=42.760561633 podStartE2EDuration="42.760561633s" podCreationTimestamp="2025-12-09 04:59:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 04:59:45.710210012 +0000 UTC m=+48.348429561" watchObservedRunningTime="2025-12-09 04:59:45.760561633 +0000 UTC m=+48.398781182"
	Dec 09 04:59:45 ha-634473 kubelet[1363]: I1209 04:59:45.879675    1363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-qrw4s" podStartSLOduration=42.879655095 podStartE2EDuration="42.879655095s" podCreationTimestamp="2025-12-09 04:59:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 04:59:45.837306505 +0000 UTC m=+48.475526071" watchObservedRunningTime="2025-12-09 04:59:45.879655095 +0000 UTC m=+48.517874636"
	Dec 09 05:02:06 ha-634473 kubelet[1363]: I1209 05:02:06.980885    1363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=182.980857648 podStartE2EDuration="3m2.980857648s" podCreationTimestamp="2025-12-09 04:59:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 04:59:45.88109638 +0000 UTC m=+48.519315930" watchObservedRunningTime="2025-12-09 05:02:06.980857648 +0000 UTC m=+189.619077189"
	Dec 09 05:02:07 ha-634473 kubelet[1363]: E1209 05:02:07.068885    1363 status_manager.go:1018] "Failed to get status for pod" err="pods \"busybox-7b57f96db7-bp5sh\" is forbidden: User \"system:node:ha-634473\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-634473' and this object" podUID="78028b06-52d2-4b18-8209-004382ee7d00" pod="default/busybox-7b57f96db7-bp5sh"
	Dec 09 05:02:07 ha-634473 kubelet[1363]: E1209 05:02:07.071580    1363 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ha-634473\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'ha-634473' and this object" logger="UnhandledError" reflector="object-\"default\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Dec 09 05:02:07 ha-634473 kubelet[1363]: I1209 05:02:07.071871    1363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dnvm\" (UniqueName: \"kubernetes.io/projected/78028b06-52d2-4b18-8209-004382ee7d00-kube-api-access-6dnvm\") pod \"busybox-7b57f96db7-bp5sh\" (UID: \"78028b06-52d2-4b18-8209-004382ee7d00\") " pod="default/busybox-7b57f96db7-bp5sh"
	Dec 09 05:02:07 ha-634473 kubelet[1363]: I1209 05:02:07.174657    1363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zftd\" (UniqueName: \"kubernetes.io/projected/ee9e3390-5321-451a-86b9-356518731dcb-kube-api-access-6zftd\") pod \"busybox-7b57f96db7-dfxnd\" (UID: \"ee9e3390-5321-451a-86b9-356518731dcb\") " pod="default/busybox-7b57f96db7-dfxnd"
	Dec 09 05:02:07 ha-634473 kubelet[1363]: E1209 05:02:07.338698    1363 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-6zftd], unattached volumes=[], failed to process volumes=[]: context canceled" pod="default/busybox-7b57f96db7-dfxnd" podUID="ee9e3390-5321-451a-86b9-356518731dcb"
	Dec 09 05:02:08 ha-634473 kubelet[1363]: E1209 05:02:08.293090    1363 projected.go:291] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Dec 09 05:02:08 ha-634473 kubelet[1363]: E1209 05:02:08.293138    1363 projected.go:196] Error preparing data for projected volume kube-api-access-6dnvm for pod default/busybox-7b57f96db7-bp5sh: failed to sync configmap cache: timed out waiting for the condition
	Dec 09 05:02:08 ha-634473 kubelet[1363]: E1209 05:02:08.293242    1363 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/78028b06-52d2-4b18-8209-004382ee7d00-kube-api-access-6dnvm podName:78028b06-52d2-4b18-8209-004382ee7d00 nodeName:}" failed. No retries permitted until 2025-12-09 05:02:08.793215394 +0000 UTC m=+191.431434926 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6dnvm" (UniqueName: "kubernetes.io/projected/78028b06-52d2-4b18-8209-004382ee7d00-kube-api-access-6dnvm") pod "busybox-7b57f96db7-bp5sh" (UID: "78028b06-52d2-4b18-8209-004382ee7d00") : failed to sync configmap cache: timed out waiting for the condition
	Dec 09 05:02:08 ha-634473 kubelet[1363]: E1209 05:02:08.458145    1363 projected.go:291] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Dec 09 05:02:08 ha-634473 kubelet[1363]: E1209 05:02:08.458201    1363 projected.go:196] Error preparing data for projected volume kube-api-access-6zftd for pod default/busybox-7b57f96db7-dfxnd: failed to sync configmap cache: timed out waiting for the condition
	Dec 09 05:02:08 ha-634473 kubelet[1363]: E1209 05:02:08.458328    1363 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ee9e3390-5321-451a-86b9-356518731dcb-kube-api-access-6zftd podName:ee9e3390-5321-451a-86b9-356518731dcb nodeName:}" failed. No retries permitted until 2025-12-09 05:02:08.958295221 +0000 UTC m=+191.596514943 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6zftd" (UniqueName: "kubernetes.io/projected/ee9e3390-5321-451a-86b9-356518731dcb-kube-api-access-6zftd") pod "busybox-7b57f96db7-dfxnd" (UID: "ee9e3390-5321-451a-86b9-356518731dcb") : failed to sync configmap cache: timed out waiting for the condition
	Dec 09 05:02:09 ha-634473 kubelet[1363]: I1209 05:02:09.102883    1363 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zftd\" (UniqueName: \"kubernetes.io/projected/ee9e3390-5321-451a-86b9-356518731dcb-kube-api-access-6zftd\") pod \"ee9e3390-5321-451a-86b9-356518731dcb\" (UID: \"ee9e3390-5321-451a-86b9-356518731dcb\") "
	Dec 09 05:02:09 ha-634473 kubelet[1363]: I1209 05:02:09.105398    1363 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee9e3390-5321-451a-86b9-356518731dcb-kube-api-access-6zftd" (OuterVolumeSpecName: "kube-api-access-6zftd") pod "ee9e3390-5321-451a-86b9-356518731dcb" (UID: "ee9e3390-5321-451a-86b9-356518731dcb"). InnerVolumeSpecName "kube-api-access-6zftd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 09 05:02:09 ha-634473 kubelet[1363]: W1209 05:02:09.164706    1363 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/451a940c6775333987f96bda1a8dac55be755a72cdd93ec853e9dcbc59469bf4/crio-c7b42bf55b7717bf1641ddb32a83a965e7bbff3ed399d9b7ee00c88af466bd53 WatchSource:0}: Error finding container c7b42bf55b7717bf1641ddb32a83a965e7bbff3ed399d9b7ee00c88af466bd53: Status 404 returned error can't find the container with id c7b42bf55b7717bf1641ddb32a83a965e7bbff3ed399d9b7ee00c88af466bd53
	Dec 09 05:02:09 ha-634473 kubelet[1363]: I1209 05:02:09.203766    1363 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6zftd\" (UniqueName: \"kubernetes.io/projected/ee9e3390-5321-451a-86b9-356518731dcb-kube-api-access-6zftd\") on node \"ha-634473\" DevicePath \"\""
	Dec 09 05:02:09 ha-634473 kubelet[1363]: I1209 05:02:09.494811    1363 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee9e3390-5321-451a-86b9-356518731dcb" path="/var/lib/kubelet/pods/ee9e3390-5321-451a-86b9-356518731dcb/volumes"
	Dec 09 05:02:13 ha-634473 kubelet[1363]: E1209 05:02:13.071143    1363 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:45648->127.0.0.1:37597: write tcp 127.0.0.1:45648->127.0.0.1:37597: write: broken pipe
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-634473 -n ha-634473
helpers_test.go:269: (dbg) Run:  kubectl --context ha-634473 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (509.53s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (1.8s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-114438 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-114438 --output=json --user=testUser: exit status 80 (1.795914645s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"152b6642-0840-44bb-b8e0-e8d93e27521e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-114438 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"77fc7803-7997-498e-a809-2d1e629db22b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-09T05:20:02Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"7e9567d0-1789-4864-8fc5-f0a900cf551d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-114438 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.80s)
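
This failure and the unpause one below are identical at the root: minikube shells out to `sudo runc list -f json`, which dies with "open /run/runc: no such file or directory" on this node. Since --output=json emits one self-contained CloudEvent per line, the machine-readable failure can be pulled straight from the stream; a hedged sketch (jq is not part of the test, just a convenient reader):

	# hedged sketch: keep only the error events from the CloudEvents stream
	out/minikube-linux-arm64 pause -p json-output-114438 --output=json --user=testUser \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'

The same filter on the unpause run below isolates the matching GUEST_UNPAUSE event.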

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.76s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-114438 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-114438 --output=json --user=testUser: exit status 80 (1.7578326s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"23cfdf81-ed0f-47a3-a325-c43460cbc035","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-114438 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"4b110638-4bd4-4a16-a321-9233023511a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-09T05:20:04Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"b0e737c8-9cb8-4c96-85a6-74a05f65f5f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-114438 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.76s)

                                                
                                    
x
+
TestKubernetesUpgrade (784.17s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-054206 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-054206 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.933613204s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-054206
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-054206: (1.365531638s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-054206 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-054206 status --format={{.Host}}: exit status 7 (72.772837ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
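
Up to this point the sequence has behaved: create on v1.28.0, stop, and the status check above confirms Stopped (exit 7 is expected for a stopped host). It is the restart on v1.35.0-beta.0, shown next, that exits 109 after 12m20s. Stripped of the test harness, the reproduction is three commands, taken verbatim from the Run lines above and below:

	out/minikube-linux-arm64 start -p kubernetes-upgrade-054206 --memory=3072 \
	  --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 stop -p kubernetes-upgrade-054206
	out/minikube-linux-arm64 start -p kubernetes-upgrade-054206 --memory=3072 \
	  --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker --container-runtime=crio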
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-054206 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1209 05:41:21.780628 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-790468/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-054206 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 109 (12m20.659500319s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-054206] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-1577059/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1577059/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-054206" primary control-plane node in "kubernetes-upgrade-054206" cluster
	* Pulling base image v0.0.48-1765184860-22066 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 05:41:00.748141 1771230 out.go:360] Setting OutFile to fd 1 ...
	I1209 05:41:00.748268 1771230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:41:00.748280 1771230 out.go:374] Setting ErrFile to fd 2...
	I1209 05:41:00.748285 1771230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:41:00.748546 1771230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 05:41:00.748942 1771230 out.go:368] Setting JSON to false
	I1209 05:41:00.749989 1771230 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":37401,"bootTime":1765221460,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1209 05:41:00.750100 1771230 start.go:143] virtualization:  
	I1209 05:41:00.755190 1771230 out.go:179] * [kubernetes-upgrade-054206] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 05:41:00.758061 1771230 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 05:41:00.758185 1771230 notify.go:221] Checking for updates...
	I1209 05:41:00.763953 1771230 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 05:41:00.766879 1771230 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 05:41:00.769636 1771230 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1577059/.minikube
	I1209 05:41:00.772465 1771230 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 05:41:00.775420 1771230 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 05:41:00.778938 1771230 config.go:182] Loaded profile config "kubernetes-upgrade-054206": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1209 05:41:00.779549 1771230 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 05:41:00.818714 1771230 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 05:41:00.818840 1771230 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:41:00.880183 1771230 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 05:41:00.870743428 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:41:00.880290 1771230 docker.go:319] overlay module found
	I1209 05:41:00.883647 1771230 out.go:179] * Using the docker driver based on existing profile
	I1209 05:41:00.886439 1771230 start.go:309] selected driver: docker
	I1209 05:41:00.886460 1771230 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-054206 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-054206 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:41:00.886622 1771230 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 05:41:00.887304 1771230 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:41:00.944735 1771230 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 05:41:00.935413645 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:41:00.945088 1771230 cni.go:84] Creating CNI manager for ""
	I1209 05:41:00.945171 1771230 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 05:41:00.945215 1771230 start.go:353] cluster config:
	{Name:kubernetes-upgrade-054206 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-054206 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:41:00.948440 1771230 out.go:179] * Starting "kubernetes-upgrade-054206" primary control-plane node in "kubernetes-upgrade-054206" cluster
	I1209 05:41:00.951335 1771230 cache.go:134] Beginning downloading kic base image for docker with crio
	I1209 05:41:00.954313 1771230 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 05:41:00.957341 1771230 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1209 05:41:00.957391 1771230 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1209 05:41:00.957402 1771230 cache.go:65] Caching tarball of preloaded images
	I1209 05:41:00.957426 1771230 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 05:41:00.957492 1771230 preload.go:238] Found /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1209 05:41:00.957503 1771230 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1209 05:41:00.957618 1771230 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/kubernetes-upgrade-054206/config.json ...
	I1209 05:41:00.978180 1771230 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 05:41:00.978204 1771230 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 05:41:00.978220 1771230 cache.go:243] Successfully downloaded all kic artifacts
	I1209 05:41:00.978252 1771230 start.go:360] acquireMachinesLock for kubernetes-upgrade-054206: {Name:mkc93d41626b42a8d262bc06808be98e1a74de6b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:41:00.978318 1771230 start.go:364] duration metric: took 40.78µs to acquireMachinesLock for "kubernetes-upgrade-054206"
	I1209 05:41:00.978342 1771230 start.go:96] Skipping create...Using existing machine configuration
	I1209 05:41:00.978352 1771230 fix.go:54] fixHost starting: 
	I1209 05:41:00.978653 1771230 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-054206 --format={{.State.Status}}
	I1209 05:41:00.995967 1771230 fix.go:112] recreateIfNeeded on kubernetes-upgrade-054206: state=Stopped err=<nil>
	W1209 05:41:00.996008 1771230 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 05:41:00.999298 1771230 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-054206" ...
	I1209 05:41:00.999455 1771230 cli_runner.go:164] Run: docker start kubernetes-upgrade-054206
	I1209 05:41:01.283453 1771230 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-054206 --format={{.State.Status}}
	I1209 05:41:01.306244 1771230 kic.go:430] container "kubernetes-upgrade-054206" state is running.
	I1209 05:41:01.306697 1771230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-054206
	I1209 05:41:01.328892 1771230 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/kubernetes-upgrade-054206/config.json ...
	I1209 05:41:01.330416 1771230 machine.go:94] provisionDockerMachine start ...
	I1209 05:41:01.330517 1771230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-054206
	I1209 05:41:01.354676 1771230 main.go:143] libmachine: Using SSH client type: native
	I1209 05:41:01.355007 1771230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34501 <nil> <nil>}
	I1209 05:41:01.355017 1771230 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 05:41:01.355645 1771230 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1209 05:41:04.514527 1771230 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-054206
	
	I1209 05:41:04.514552 1771230 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-054206"
	I1209 05:41:04.514681 1771230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-054206
	I1209 05:41:04.533070 1771230 main.go:143] libmachine: Using SSH client type: native
	I1209 05:41:04.533494 1771230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34501 <nil> <nil>}
	I1209 05:41:04.533516 1771230 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-054206 && echo "kubernetes-upgrade-054206" | sudo tee /etc/hostname
	I1209 05:41:04.696978 1771230 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-054206
	
	I1209 05:41:04.697056 1771230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-054206
	I1209 05:41:04.716234 1771230 main.go:143] libmachine: Using SSH client type: native
	I1209 05:41:04.716549 1771230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34501 <nil> <nil>}
	I1209 05:41:04.716571 1771230 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-054206' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-054206/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-054206' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 05:41:04.870993 1771230 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 05:41:04.871019 1771230 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1577059/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1577059/.minikube}
	I1209 05:41:04.871049 1771230 ubuntu.go:190] setting up certificates
	I1209 05:41:04.871067 1771230 provision.go:84] configureAuth start
	I1209 05:41:04.871133 1771230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-054206
	I1209 05:41:04.888158 1771230 provision.go:143] copyHostCerts
	I1209 05:41:04.888225 1771230 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem, removing ...
	I1209 05:41:04.888236 1771230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem
	I1209 05:41:04.888313 1771230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem (1078 bytes)
	I1209 05:41:04.888410 1771230 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem, removing ...
	I1209 05:41:04.888416 1771230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem
	I1209 05:41:04.888444 1771230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem (1123 bytes)
	I1209 05:41:04.888494 1771230 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem, removing ...
	I1209 05:41:04.888498 1771230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem
	I1209 05:41:04.888521 1771230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem (1675 bytes)
	I1209 05:41:04.888618 1771230 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-054206 san=[127.0.0.1 192.168.85.2 kubernetes-upgrade-054206 localhost minikube]
	I1209 05:41:05.205500 1771230 provision.go:177] copyRemoteCerts
	I1209 05:41:05.205572 1771230 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 05:41:05.205624 1771230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-054206
	I1209 05:41:05.231348 1771230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34501 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/kubernetes-upgrade-054206/id_rsa Username:docker}
	I1209 05:41:05.342474 1771230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 05:41:05.364477 1771230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1209 05:41:05.383672 1771230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 05:41:05.403438 1771230 provision.go:87] duration metric: took 532.346268ms to configureAuth
	I1209 05:41:05.403466 1771230 ubuntu.go:206] setting minikube options for container-runtime
	I1209 05:41:05.403665 1771230 config.go:182] Loaded profile config "kubernetes-upgrade-054206": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 05:41:05.403774 1771230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-054206
	I1209 05:41:05.421487 1771230 main.go:143] libmachine: Using SSH client type: native
	I1209 05:41:05.421878 1771230 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34501 <nil> <nil>}
	I1209 05:41:05.421912 1771230 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 05:41:05.786895 1771230 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 05:41:05.786924 1771230 machine.go:97] duration metric: took 4.456481335s to provisionDockerMachine
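The option file written above is what makes the restarted CRI-O treat the service CIDR (10.96.0.0/12, per the cluster config) as an insecure registry range. A quick hedged check that it landed and the daemon came back, not taken from the log:

	cat /etc/sysconfig/crio.minikube   # should print the CRIO_MINIKUBE_OPTIONS line
	systemctl is-active crio           # "active" once the restart has completed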
	I1209 05:41:05.786937 1771230 start.go:293] postStartSetup for "kubernetes-upgrade-054206" (driver="docker")
	I1209 05:41:05.786949 1771230 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 05:41:05.787028 1771230 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 05:41:05.787072 1771230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-054206
	I1209 05:41:05.804697 1771230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34501 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/kubernetes-upgrade-054206/id_rsa Username:docker}
	I1209 05:41:05.915565 1771230 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 05:41:05.919014 1771230 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 05:41:05.919042 1771230 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 05:41:05.919054 1771230 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1577059/.minikube/addons for local assets ...
	I1209 05:41:05.919131 1771230 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1577059/.minikube/files for local assets ...
	I1209 05:41:05.919249 1771230 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem -> 15805212.pem in /etc/ssl/certs
	I1209 05:41:05.919373 1771230 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 05:41:05.927043 1771230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem --> /etc/ssl/certs/15805212.pem (1708 bytes)
	I1209 05:41:05.947920 1771230 start.go:296] duration metric: took 160.967849ms for postStartSetup
	I1209 05:41:05.948069 1771230 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:41:05.948165 1771230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-054206
	I1209 05:41:05.965812 1771230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34501 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/kubernetes-upgrade-054206/id_rsa Username:docker}
	I1209 05:41:06.072746 1771230 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 05:41:06.078190 1771230 fix.go:56] duration metric: took 5.099830283s for fixHost
	I1209 05:41:06.078220 1771230 start.go:83] releasing machines lock for "kubernetes-upgrade-054206", held for 5.099889459s
	I1209 05:41:06.078307 1771230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-054206
	I1209 05:41:06.097865 1771230 ssh_runner.go:195] Run: cat /version.json
	I1209 05:41:06.097899 1771230 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 05:41:06.097925 1771230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-054206
	I1209 05:41:06.097958 1771230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-054206
	I1209 05:41:06.120372 1771230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34501 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/kubernetes-upgrade-054206/id_rsa Username:docker}
	I1209 05:41:06.136535 1771230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34501 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/kubernetes-upgrade-054206/id_rsa Username:docker}
	I1209 05:41:06.313743 1771230 ssh_runner.go:195] Run: systemctl --version
	I1209 05:41:06.320494 1771230 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 05:41:06.357713 1771230 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 05:41:06.362881 1771230 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 05:41:06.362955 1771230 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 05:41:06.370892 1771230 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 05:41:06.370920 1771230 start.go:496] detecting cgroup driver to use...
	I1209 05:41:06.370952 1771230 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 05:41:06.371002 1771230 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 05:41:06.385927 1771230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 05:41:06.399360 1771230 docker.go:218] disabling cri-docker service (if available) ...
	I1209 05:41:06.399473 1771230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 05:41:06.415577 1771230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 05:41:06.428698 1771230 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 05:41:06.543069 1771230 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 05:41:06.668953 1771230 docker.go:234] disabling docker service ...
	I1209 05:41:06.669076 1771230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 05:41:06.684744 1771230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 05:41:06.699221 1771230 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 05:41:06.818087 1771230 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 05:41:06.953178 1771230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 05:41:06.966756 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 05:41:06.983193 1771230 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 05:41:06.983341 1771230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 05:41:06.992780 1771230 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 05:41:06.992878 1771230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 05:41:07.002793 1771230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 05:41:07.014124 1771230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 05:41:07.025135 1771230 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 05:41:07.034142 1771230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 05:41:07.043936 1771230 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 05:41:07.052877 1771230 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 05:41:07.062213 1771230 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 05:41:07.070312 1771230 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 05:41:07.078349 1771230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:41:07.194691 1771230 ssh_runner.go:195] Run: sudo systemctl restart crio
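The sed runs above amount to a small in-place rewrite of CRI-O's drop-in config. Collected into one sketch (the same expressions as the log, against the same file):

	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"   # pin the pause image
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"                 # match the host cgroup driver
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"                                                  # drop any stale conmon_cgroup
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"                           # re-add it right after cgroup_manager
	sudo systemctl daemon-reload && sudo systemctl restart crio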
	I1209 05:41:07.374140 1771230 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 05:41:07.374241 1771230 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 05:41:07.378226 1771230 start.go:564] Will wait 60s for crictl version
	I1209 05:41:07.378319 1771230 ssh_runner.go:195] Run: which crictl
	I1209 05:41:07.381870 1771230 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 05:41:07.406910 1771230 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1209 05:41:07.407047 1771230 ssh_runner.go:195] Run: crio --version
	I1209 05:41:07.438391 1771230 ssh_runner.go:195] Run: crio --version
	I1209 05:41:07.472458 1771230 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1209 05:41:07.475278 1771230 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-054206 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 05:41:07.490853 1771230 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1209 05:41:07.495021 1771230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
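The /etc/hosts rewrite above is an idempotent replace-or-append done in one pass: filter out any existing host.minikube.internal line, append the fresh mapping, then copy the result back with sudo (a plain redirection would run as the unprivileged user). The same pattern reappears below for control-plane.minikube.internal:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts; printf '192.168.85.1\thost.minikube.internal\n'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts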
	I1209 05:41:07.506912 1771230 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-054206 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-054206 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 05:41:07.507023 1771230 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1209 05:41:07.507085 1771230 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 05:41:07.545493 1771230 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1209 05:41:07.545567 1771230 ssh_runner.go:195] Run: which lz4
	I1209 05:41:07.555068 1771230 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 05:41:07.558770 1771230 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 05:41:07.558827 1771230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (306100841 bytes)
	I1209 05:41:09.060238 1771230 crio.go:462] duration metric: took 1.505247472s to copy over tarball
	I1209 05:41:09.060393 1771230 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 05:41:11.257572 1771230 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.197128236s)
	I1209 05:41:11.257645 1771230 crio.go:469] duration metric: took 2.19732096s to extract the tarball
	I1209 05:41:11.257658 1771230 ssh_runner.go:146] rm: /preloaded.tar.lz4
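The preload handling above is: probe for the tarball, copy it over SSH only when the probe fails (exit status 1, as here), unpack it into /var where CRI-O keeps its image store, then delete it. The on-node half as a sketch, with the paths from the log:

	stat -c "%s %y" /preloaded.tar.lz4   # exits 1 when missing, which is what triggers the scp above
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm -f /preloaded.tar.lz4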
	I1209 05:41:11.295923 1771230 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 05:41:11.329019 1771230 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 05:41:11.329043 1771230 cache_images.go:86] Images are preloaded, skipping loading
	I1209 05:41:11.329051 1771230 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1209 05:41:11.329159 1771230 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-054206 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-054206 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
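The unit fragment above lands on disk as a systemd drop-in next to the stock kubelet.service (both files are scp'd a few lines below). To see the merged result on the node:

	systemctl cat kubelet   # prints /lib/systemd/system/kubelet.service plus
	                        # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf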
	I1209 05:41:11.329246 1771230 ssh_runner.go:195] Run: crio config
	I1209 05:41:11.398096 1771230 cni.go:84] Creating CNI manager for ""
	I1209 05:41:11.398124 1771230 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 05:41:11.398144 1771230 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 05:41:11.398171 1771230 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-054206 NodeName:kubernetes-upgrade-054206 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 05:41:11.398305 1771230 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-054206"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 05:41:11.398388 1771230 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1209 05:41:11.407748 1771230 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 05:41:11.407872 1771230 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 05:41:11.416065 1771230 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1209 05:41:11.429655 1771230 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1209 05:41:11.443124 1771230 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
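With kubeadm.yaml.new on the node, the generated file can be sanity-checked before any init phase consumes it. Recent kubeadm releases ship a validator (assumed available in this v1.35.0-beta.0 build):

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new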
	I1209 05:41:11.457928 1771230 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1209 05:41:11.462567 1771230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 05:41:11.472846 1771230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:41:11.605277 1771230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 05:41:11.625104 1771230 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/kubernetes-upgrade-054206 for IP: 192.168.85.2
	I1209 05:41:11.625184 1771230 certs.go:195] generating shared ca certs ...
	I1209 05:41:11.625215 1771230 certs.go:227] acquiring lock for ca certs: {Name:mkbe8bce08db7aa945866791683d426e1b560718 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:41:11.625399 1771230 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key
	I1209 05:41:11.625482 1771230 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key
	I1209 05:41:11.625516 1771230 certs.go:257] generating profile certs ...
	I1209 05:41:11.625648 1771230 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/kubernetes-upgrade-054206/client.key
	I1209 05:41:11.625789 1771230 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/kubernetes-upgrade-054206/apiserver.key.44cc63b1
	I1209 05:41:11.625862 1771230 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/kubernetes-upgrade-054206/proxy-client.key
	I1209 05:41:11.626021 1771230 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem (1338 bytes)
	W1209 05:41:11.626083 1771230 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521_empty.pem, impossibly tiny 0 bytes
	I1209 05:41:11.626109 1771230 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 05:41:11.626175 1771230 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem (1078 bytes)
	I1209 05:41:11.626261 1771230 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem (1123 bytes)
	I1209 05:41:11.626321 1771230 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem (1675 bytes)
	I1209 05:41:11.626403 1771230 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem (1708 bytes)
	I1209 05:41:11.627073 1771230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 05:41:11.650179 1771230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 05:41:11.671594 1771230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 05:41:11.693745 1771230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 05:41:11.712547 1771230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/kubernetes-upgrade-054206/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1209 05:41:11.730512 1771230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/kubernetes-upgrade-054206/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 05:41:11.749032 1771230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/kubernetes-upgrade-054206/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 05:41:11.767526 1771230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/kubernetes-upgrade-054206/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 05:41:11.787425 1771230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem --> /usr/share/ca-certificates/1580521.pem (1338 bytes)
	I1209 05:41:11.807355 1771230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem --> /usr/share/ca-certificates/15805212.pem (1708 bytes)
	I1209 05:41:11.826351 1771230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 05:41:11.845612 1771230 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 05:41:11.859622 1771230 ssh_runner.go:195] Run: openssl version
	I1209 05:41:11.869663 1771230 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:41:11.879319 1771230 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 05:41:11.888978 1771230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:41:11.893662 1771230 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:17 /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:41:11.893750 1771230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:41:11.936535 1771230 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 05:41:11.944471 1771230 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1580521.pem
	I1209 05:41:11.951985 1771230 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1580521.pem /etc/ssl/certs/1580521.pem
	I1209 05:41:11.959620 1771230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1580521.pem
	I1209 05:41:11.963595 1771230 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:27 /usr/share/ca-certificates/1580521.pem
	I1209 05:41:11.963704 1771230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1580521.pem
	I1209 05:41:12.007062 1771230 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 05:41:12.016982 1771230 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15805212.pem
	I1209 05:41:12.025428 1771230 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15805212.pem /etc/ssl/certs/15805212.pem
	I1209 05:41:12.033869 1771230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15805212.pem
	I1209 05:41:12.038229 1771230 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:27 /usr/share/ca-certificates/15805212.pem
	I1209 05:41:12.038356 1771230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15805212.pem
	I1209 05:41:12.079998 1771230 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
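The hash-and-symlink sequences above follow OpenSSL's CA directory layout: a certificate in /etc/ssl/certs is located through a symlink named <subject-hash>.0, where the hash is exactly what `openssl x509 -hash` prints. For the minikube CA, this run links b5213941.0:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0                                           # symlink back to minikubeCA.pem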
	I1209 05:41:12.088825 1771230 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 05:41:12.092812 1771230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 05:41:12.136395 1771230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 05:41:12.180532 1771230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 05:41:12.223401 1771230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 05:41:12.265989 1771230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 05:41:12.313668 1771230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
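`-checkend 86400` asks whether the certificate expires within the next 86400 seconds (24h); exit status 0 means it stays valid past that window, which is why a passing run logs nothing further. The six checks above as one loop:

	for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
	         etcd/healthcheck-client etcd/peer front-proxy-client; do
	  sudo openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 || echo "$c expires within 24h"
	done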
	I1209 05:41:12.358482 1771230 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-054206 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-054206 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:41:12.358668 1771230 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 05:41:12.358782 1771230 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 05:41:12.397072 1771230 cri.go:89] found id: ""
	I1209 05:41:12.397194 1771230 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 05:41:12.405504 1771230 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1209 05:41:12.405527 1771230 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1209 05:41:12.405585 1771230 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 05:41:12.413226 1771230 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 05:41:12.413842 1771230 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-054206" does not appear in /home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 05:41:12.414100 1771230 kubeconfig.go:62] /home/jenkins/minikube-integration/22081-1577059/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-054206" cluster setting kubeconfig missing "kubernetes-upgrade-054206" context setting]
	I1209 05:41:12.414693 1771230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/kubeconfig: {Name:mk56da51bd85daae017f7ca18ae73d8a385a4c6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:41:12.415395 1771230 kapi.go:59] client config for kubernetes-upgrade-054206: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/kubernetes-upgrade-054206/client.crt", KeyFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/kubernetes-upgrade-054206/client.key", CAFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 05:41:12.415926 1771230 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1209 05:41:12.415945 1771230 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1209 05:41:12.416042 1771230 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1209 05:41:12.416058 1771230 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1209 05:41:12.416063 1771230 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1209 05:41:12.416412 1771230 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 05:41:12.427395 1771230 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-09 05:40:39.368544448 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-09 05:41:11.452835292 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.85.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///var/run/crio/crio.sock
	   name: "kubernetes-upgrade-054206"
	   kubeletExtraArgs:
	-    node-ip: 192.168.85.2
	+    - name: "node-ip"
	+      value: "192.168.85.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-beta.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
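The detected drift is essentially the kubeadm API bump (plus the target Kubernetes version): kubeadm.k8s.io/v1beta3 carried extraArgs as a plain string map, while v1beta4, the default in recent kubeadm releases (around v1.31 and later), takes a list of name/value pairs, so every extraArgs block is rewritten. Checking which schema an existing file uses is a one-liner:

	grep '^apiVersion: kubeadm.k8s.io' /var/tmp/minikube/kubeadm.yaml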
	I1209 05:41:12.427419 1771230 kubeadm.go:1161] stopping kube-system containers ...
	I1209 05:41:12.427431 1771230 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 05:41:12.427511 1771230 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 05:41:12.456964 1771230 cri.go:89] found id: ""
	I1209 05:41:12.457093 1771230 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 05:41:12.471420 1771230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 05:41:12.479761 1771230 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5639 Dec  9 05:40 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Dec  9 05:40 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec  9 05:40 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Dec  9 05:40 /etc/kubernetes/scheduler.conf
	
	I1209 05:41:12.479834 1771230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 05:41:12.489014 1771230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 05:41:12.497239 1771230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 05:41:12.511124 1771230 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 05:41:12.511248 1771230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 05:41:12.519239 1771230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 05:41:12.527139 1771230 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 05:41:12.527254 1771230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 05:41:12.534523 1771230 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 05:41:12.542218 1771230 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 05:41:12.592912 1771230 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 05:41:14.079185 1771230 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.486240605s)
	I1209 05:41:14.079261 1771230 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 05:41:14.285498 1771230 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 05:41:14.357508 1771230 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
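Because existing configuration files were found, the restart path replays individual `kubeadm init` phases instead of running a full init. The sequence above as a standalone sketch, with the binary and config paths taken from the log:

	KUBEADM=/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	CFG=/var/tmp/minikube/kubeadm.yaml
	for phase in 'certs all' 'kubeconfig all' 'kubelet-start' 'control-plane all' 'etcd local'; do
	  sudo $KUBEADM init phase $phase --config "$CFG"   # $phase intentionally unquoted so it word-splits
	done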
	I1209 05:41:14.402920 1771230 api_server.go:52] waiting for apiserver process to appear ...
	I1209 05:41:14.403004 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:41:14.903869 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:41:15.403295 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:41:15.903587 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:41:16.403786 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:41:16.903500 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:41:17.403826 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the identical pgrep check repeated every ~500ms with no match, from 05:41:17.403826 through 05:42:13.903769 ...]
	I1209 05:42:14.403126 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:42:14.403236 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:42:14.430166 1771230 cri.go:89] found id: ""
	I1209 05:42:14.430201 1771230 logs.go:282] 0 containers: []
	W1209 05:42:14.430209 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:42:14.430217 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:42:14.430280 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:42:14.457192 1771230 cri.go:89] found id: ""
	I1209 05:42:14.457218 1771230 logs.go:282] 0 containers: []
	W1209 05:42:14.457227 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:42:14.457233 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:42:14.457296 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:42:14.483280 1771230 cri.go:89] found id: ""
	I1209 05:42:14.483306 1771230 logs.go:282] 0 containers: []
	W1209 05:42:14.483316 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:42:14.483323 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:42:14.483387 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:42:14.519391 1771230 cri.go:89] found id: ""
	I1209 05:42:14.519416 1771230 logs.go:282] 0 containers: []
	W1209 05:42:14.519426 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:42:14.519433 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:42:14.519504 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:42:14.548650 1771230 cri.go:89] found id: ""
	I1209 05:42:14.548671 1771230 logs.go:282] 0 containers: []
	W1209 05:42:14.548690 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:42:14.548697 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:42:14.548768 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:42:14.579667 1771230 cri.go:89] found id: ""
	I1209 05:42:14.579688 1771230 logs.go:282] 0 containers: []
	W1209 05:42:14.579697 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:42:14.579704 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:42:14.579771 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:42:14.612982 1771230 cri.go:89] found id: ""
	I1209 05:42:14.613004 1771230 logs.go:282] 0 containers: []
	W1209 05:42:14.613012 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:42:14.613027 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:42:14.613083 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:42:14.641752 1771230 cri.go:89] found id: ""
	I1209 05:42:14.641774 1771230 logs.go:282] 0 containers: []
	W1209 05:42:14.641782 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:42:14.641791 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:42:14.641801 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:42:14.708467 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:42:14.708488 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:42:14.708503 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:42:14.741966 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:42:14.741997 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:42:14.771542 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:42:14.771572 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:42:14.839336 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:42:14.839372 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
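Once the process check has failed for long enough, the log falls back to CRI and journal diagnostics: crictl is queried for each control-plane container by name, then kubelet, CRI-O, dmesg, and describe-nodes output is collected. The sketch below replays that triage sequence with the same commands shown in the Run: lines above; it assumes direct shell access to the node and is a reproduction aid, not minikube's own code path.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gatherDiagnostics runs the node-side commands copied verbatim from
    // the Run: lines above, in the same order the log shows.
    func gatherDiagnostics() {
    	cmds := [][]string{
    		{"sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver"},
    		{"/bin/bash", "-c", "sudo journalctl -u crio -n 400"},
    		{"/bin/bash", "-c", "sudo journalctl -u kubelet -n 400"},
    		{"/bin/bash", "-c", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
    	}
    	for _, c := range cmds {
    		out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
    		fmt.Printf("$ %v\nerr=%v\n%s\n", c, err, out)
    	}
    }

    func main() { gatherDiagnostics() }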
	I1209 05:42:17.357708 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:42:17.367979 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:42:17.368051 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:42:17.398447 1771230 cri.go:89] found id: ""
	I1209 05:42:17.398469 1771230 logs.go:282] 0 containers: []
	W1209 05:42:17.398478 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:42:17.398484 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:42:17.398542 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:42:17.434094 1771230 cri.go:89] found id: ""
	I1209 05:42:17.434116 1771230 logs.go:282] 0 containers: []
	W1209 05:42:17.434124 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:42:17.434130 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:42:17.434186 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:42:17.467380 1771230 cri.go:89] found id: ""
	I1209 05:42:17.467400 1771230 logs.go:282] 0 containers: []
	W1209 05:42:17.467408 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:42:17.467414 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:42:17.467471 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:42:17.498196 1771230 cri.go:89] found id: ""
	I1209 05:42:17.498217 1771230 logs.go:282] 0 containers: []
	W1209 05:42:17.498224 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:42:17.498230 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:42:17.498297 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:42:17.530483 1771230 cri.go:89] found id: ""
	I1209 05:42:17.530506 1771230 logs.go:282] 0 containers: []
	W1209 05:42:17.530514 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:42:17.530520 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:42:17.530607 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:42:17.555605 1771230 cri.go:89] found id: ""
	I1209 05:42:17.555670 1771230 logs.go:282] 0 containers: []
	W1209 05:42:17.555696 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:42:17.555715 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:42:17.555802 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:42:17.591024 1771230 cri.go:89] found id: ""
	I1209 05:42:17.591061 1771230 logs.go:282] 0 containers: []
	W1209 05:42:17.591070 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:42:17.591076 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:42:17.591143 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:42:17.626548 1771230 cri.go:89] found id: ""
	I1209 05:42:17.626669 1771230 logs.go:282] 0 containers: []
	W1209 05:42:17.626692 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:42:17.626714 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:42:17.626755 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:42:17.663885 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:42:17.663912 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:42:17.734190 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:42:17.734228 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:42:17.751558 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:42:17.751586 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:42:17.822698 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:42:17.822769 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:42:17.822796 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:42:20.355761 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:42:20.365801 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:42:20.365871 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:42:20.391655 1771230 cri.go:89] found id: ""
	I1209 05:42:20.391677 1771230 logs.go:282] 0 containers: []
	W1209 05:42:20.391685 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:42:20.391692 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:42:20.391750 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:42:20.418494 1771230 cri.go:89] found id: ""
	I1209 05:42:20.418612 1771230 logs.go:282] 0 containers: []
	W1209 05:42:20.418642 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:42:20.418662 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:42:20.418787 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:42:20.444657 1771230 cri.go:89] found id: ""
	I1209 05:42:20.444725 1771230 logs.go:282] 0 containers: []
	W1209 05:42:20.444749 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:42:20.444763 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:42:20.444837 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:42:20.474822 1771230 cri.go:89] found id: ""
	I1209 05:42:20.474893 1771230 logs.go:282] 0 containers: []
	W1209 05:42:20.474920 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:42:20.474934 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:42:20.475018 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:42:20.509103 1771230 cri.go:89] found id: ""
	I1209 05:42:20.509129 1771230 logs.go:282] 0 containers: []
	W1209 05:42:20.509138 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:42:20.509144 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:42:20.509209 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:42:20.536157 1771230 cri.go:89] found id: ""
	I1209 05:42:20.536184 1771230 logs.go:282] 0 containers: []
	W1209 05:42:20.536193 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:42:20.536200 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:42:20.536259 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:42:20.562732 1771230 cri.go:89] found id: ""
	I1209 05:42:20.562769 1771230 logs.go:282] 0 containers: []
	W1209 05:42:20.562779 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:42:20.562786 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:42:20.562857 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:42:20.596194 1771230 cri.go:89] found id: ""
	I1209 05:42:20.596218 1771230 logs.go:282] 0 containers: []
	W1209 05:42:20.596226 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:42:20.596236 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:42:20.596248 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:42:20.674491 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:42:20.674533 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:42:20.691779 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:42:20.691806 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:42:20.756107 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:42:20.756131 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:42:20.756144 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:42:20.787132 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:42:20.787165 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
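Every describe-nodes attempt in these cycles fails with "connection refused" on localhost:8443, which is consistent with the empty crictl listings: nothing is serving the apiserver's secure port. A quick way to confirm that from the node is to probe the port directly (a sketch; localhost:8443 is the address kubectl reports in the stderr blocks above).

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Probe the apiserver port that kubectl is failing against above.
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver port closed:", err) // matches the log's refusal
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port open")
    }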
	I1209 05:42:23.320364 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:42:23.330478 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:42:23.330561 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:42:23.369218 1771230 cri.go:89] found id: ""
	I1209 05:42:23.369289 1771230 logs.go:282] 0 containers: []
	W1209 05:42:23.369313 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:42:23.369334 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:42:23.369453 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:42:23.395474 1771230 cri.go:89] found id: ""
	I1209 05:42:23.395496 1771230 logs.go:282] 0 containers: []
	W1209 05:42:23.395504 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:42:23.395510 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:42:23.395572 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:42:23.424044 1771230 cri.go:89] found id: ""
	I1209 05:42:23.424079 1771230 logs.go:282] 0 containers: []
	W1209 05:42:23.424088 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:42:23.424094 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:42:23.424159 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:42:23.452247 1771230 cri.go:89] found id: ""
	I1209 05:42:23.452269 1771230 logs.go:282] 0 containers: []
	W1209 05:42:23.452282 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:42:23.452288 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:42:23.452345 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:42:23.483066 1771230 cri.go:89] found id: ""
	I1209 05:42:23.483091 1771230 logs.go:282] 0 containers: []
	W1209 05:42:23.483101 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:42:23.483108 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:42:23.483213 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:42:23.517397 1771230 cri.go:89] found id: ""
	I1209 05:42:23.517468 1771230 logs.go:282] 0 containers: []
	W1209 05:42:23.517504 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:42:23.517527 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:42:23.517627 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:42:23.544284 1771230 cri.go:89] found id: ""
	I1209 05:42:23.544312 1771230 logs.go:282] 0 containers: []
	W1209 05:42:23.544321 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:42:23.544328 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:42:23.544389 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:42:23.570308 1771230 cri.go:89] found id: ""
	I1209 05:42:23.570334 1771230 logs.go:282] 0 containers: []
	W1209 05:42:23.570343 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:42:23.570351 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:42:23.570363 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:42:23.646340 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:42:23.646417 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:42:23.664897 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:42:23.664923 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:42:23.732896 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:42:23.732918 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:42:23.732931 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:42:23.764106 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:42:23.764181 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:42:26.295041 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:42:26.305618 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:42:26.305696 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:42:26.334527 1771230 cri.go:89] found id: ""
	I1209 05:42:26.334550 1771230 logs.go:282] 0 containers: []
	W1209 05:42:26.334559 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:42:26.334566 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:42:26.334659 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:42:26.364188 1771230 cri.go:89] found id: ""
	I1209 05:42:26.364282 1771230 logs.go:282] 0 containers: []
	W1209 05:42:26.364315 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:42:26.364345 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:42:26.364425 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:42:26.391848 1771230 cri.go:89] found id: ""
	I1209 05:42:26.391923 1771230 logs.go:282] 0 containers: []
	W1209 05:42:26.391946 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:42:26.391964 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:42:26.392055 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:42:26.423034 1771230 cri.go:89] found id: ""
	I1209 05:42:26.423059 1771230 logs.go:282] 0 containers: []
	W1209 05:42:26.423068 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:42:26.423074 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:42:26.423152 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:42:26.450314 1771230 cri.go:89] found id: ""
	I1209 05:42:26.450352 1771230 logs.go:282] 0 containers: []
	W1209 05:42:26.450361 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:42:26.450368 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:42:26.450504 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:42:26.476917 1771230 cri.go:89] found id: ""
	I1209 05:42:26.476943 1771230 logs.go:282] 0 containers: []
	W1209 05:42:26.476953 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:42:26.476959 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:42:26.477029 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:42:26.509707 1771230 cri.go:89] found id: ""
	I1209 05:42:26.509782 1771230 logs.go:282] 0 containers: []
	W1209 05:42:26.509819 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:42:26.509843 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:42:26.509933 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:42:26.539045 1771230 cri.go:89] found id: ""
	I1209 05:42:26.539122 1771230 logs.go:282] 0 containers: []
	W1209 05:42:26.539137 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:42:26.539147 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:42:26.539159 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:42:26.570966 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:42:26.570994 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:42:26.648512 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:42:26.648591 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:42:26.670476 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:42:26.670507 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:42:26.735653 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:42:26.735676 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:42:26.735691 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:42:29.271788 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:42:29.282370 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:42:29.282445 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:42:29.310711 1771230 cri.go:89] found id: ""
	I1209 05:42:29.310733 1771230 logs.go:282] 0 containers: []
	W1209 05:42:29.310742 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:42:29.310763 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:42:29.310848 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:42:29.339678 1771230 cri.go:89] found id: ""
	I1209 05:42:29.339703 1771230 logs.go:282] 0 containers: []
	W1209 05:42:29.339711 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:42:29.339717 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:42:29.339782 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:42:29.366840 1771230 cri.go:89] found id: ""
	I1209 05:42:29.366867 1771230 logs.go:282] 0 containers: []
	W1209 05:42:29.366877 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:42:29.366884 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:42:29.366949 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:42:29.392858 1771230 cri.go:89] found id: ""
	I1209 05:42:29.392884 1771230 logs.go:282] 0 containers: []
	W1209 05:42:29.392893 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:42:29.392899 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:42:29.392958 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:42:29.418677 1771230 cri.go:89] found id: ""
	I1209 05:42:29.418759 1771230 logs.go:282] 0 containers: []
	W1209 05:42:29.418774 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:42:29.418781 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:42:29.418840 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:42:29.443897 1771230 cri.go:89] found id: ""
	I1209 05:42:29.443920 1771230 logs.go:282] 0 containers: []
	W1209 05:42:29.443928 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:42:29.443934 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:42:29.444039 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:42:29.472952 1771230 cri.go:89] found id: ""
	I1209 05:42:29.472979 1771230 logs.go:282] 0 containers: []
	W1209 05:42:29.472988 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:42:29.472994 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:42:29.473054 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:42:29.505140 1771230 cri.go:89] found id: ""
	I1209 05:42:29.505167 1771230 logs.go:282] 0 containers: []
	W1209 05:42:29.505176 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:42:29.505185 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:42:29.505197 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:42:29.574945 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:42:29.575031 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:42:29.594820 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:42:29.594906 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:42:29.665834 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:42:29.665861 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:42:29.665874 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:42:29.701396 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:42:29.701431 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:42:32.231948 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:42:32.242375 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:42:32.242449 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:42:32.268371 1771230 cri.go:89] found id: ""
	I1209 05:42:32.268394 1771230 logs.go:282] 0 containers: []
	W1209 05:42:32.268404 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:42:32.268410 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:42:32.268469 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:42:32.296297 1771230 cri.go:89] found id: ""
	I1209 05:42:32.296323 1771230 logs.go:282] 0 containers: []
	W1209 05:42:32.296332 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:42:32.296338 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:42:32.296398 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:42:32.322286 1771230 cri.go:89] found id: ""
	I1209 05:42:32.322313 1771230 logs.go:282] 0 containers: []
	W1209 05:42:32.322322 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:42:32.322328 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:42:32.322385 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:42:32.351586 1771230 cri.go:89] found id: ""
	I1209 05:42:32.351608 1771230 logs.go:282] 0 containers: []
	W1209 05:42:32.351616 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:42:32.351622 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:42:32.351685 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:42:32.379404 1771230 cri.go:89] found id: ""
	I1209 05:42:32.379428 1771230 logs.go:282] 0 containers: []
	W1209 05:42:32.379437 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:42:32.379443 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:42:32.379502 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:42:32.404917 1771230 cri.go:89] found id: ""
	I1209 05:42:32.404939 1771230 logs.go:282] 0 containers: []
	W1209 05:42:32.404948 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:42:32.404954 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:42:32.405012 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:42:32.430775 1771230 cri.go:89] found id: ""
	I1209 05:42:32.430809 1771230 logs.go:282] 0 containers: []
	W1209 05:42:32.430820 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:42:32.430827 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:42:32.430888 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:42:32.456997 1771230 cri.go:89] found id: ""
	I1209 05:42:32.457020 1771230 logs.go:282] 0 containers: []
	W1209 05:42:32.457028 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:42:32.457037 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:42:32.457051 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:42:32.528057 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:42:32.528093 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:42:32.545336 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:42:32.545367 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:42:32.629143 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:42:32.629165 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:42:32.629180 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:42:32.663110 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:42:32.663146 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:42:35.195809 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:42:35.206194 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:42:35.206318 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:42:35.239294 1771230 cri.go:89] found id: ""
	I1209 05:42:35.239319 1771230 logs.go:282] 0 containers: []
	W1209 05:42:35.239328 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:42:35.239335 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:42:35.239400 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:42:35.266733 1771230 cri.go:89] found id: ""
	I1209 05:42:35.266759 1771230 logs.go:282] 0 containers: []
	W1209 05:42:35.266768 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:42:35.266774 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:42:35.266836 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:42:35.294320 1771230 cri.go:89] found id: ""
	I1209 05:42:35.294348 1771230 logs.go:282] 0 containers: []
	W1209 05:42:35.294358 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:42:35.294365 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:42:35.294430 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:42:35.320752 1771230 cri.go:89] found id: ""
	I1209 05:42:35.320828 1771230 logs.go:282] 0 containers: []
	W1209 05:42:35.320845 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:42:35.320852 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:42:35.320914 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:42:35.346972 1771230 cri.go:89] found id: ""
	I1209 05:42:35.346994 1771230 logs.go:282] 0 containers: []
	W1209 05:42:35.347002 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:42:35.347009 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:42:35.347066 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:42:35.373352 1771230 cri.go:89] found id: ""
	I1209 05:42:35.373379 1771230 logs.go:282] 0 containers: []
	W1209 05:42:35.373388 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:42:35.373395 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:42:35.373455 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:42:35.400097 1771230 cri.go:89] found id: ""
	I1209 05:42:35.400168 1771230 logs.go:282] 0 containers: []
	W1209 05:42:35.400183 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:42:35.400189 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:42:35.400255 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:42:35.426999 1771230 cri.go:89] found id: ""
	I1209 05:42:35.427022 1771230 logs.go:282] 0 containers: []
	W1209 05:42:35.427031 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:42:35.427039 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:42:35.427051 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:42:35.443493 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:42:35.443525 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:42:35.527330 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:42:35.527355 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:42:35.527369 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:42:35.559224 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:42:35.559260 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:42:35.610201 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:42:35.610231 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:42:38.185912 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:42:38.196971 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:42:38.197055 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:42:38.231351 1771230 cri.go:89] found id: ""
	I1209 05:42:38.231377 1771230 logs.go:282] 0 containers: []
	W1209 05:42:38.231387 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:42:38.231396 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:42:38.231464 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:42:38.259288 1771230 cri.go:89] found id: ""
	I1209 05:42:38.259315 1771230 logs.go:282] 0 containers: []
	W1209 05:42:38.259324 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:42:38.259330 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:42:38.259393 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:42:38.287180 1771230 cri.go:89] found id: ""
	I1209 05:42:38.287206 1771230 logs.go:282] 0 containers: []
	W1209 05:42:38.287215 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:42:38.287222 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:42:38.287285 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:42:38.315279 1771230 cri.go:89] found id: ""
	I1209 05:42:38.315312 1771230 logs.go:282] 0 containers: []
	W1209 05:42:38.315321 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:42:38.315328 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:42:38.315391 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:42:38.343404 1771230 cri.go:89] found id: ""
	I1209 05:42:38.343431 1771230 logs.go:282] 0 containers: []
	W1209 05:42:38.343440 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:42:38.343447 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:42:38.343508 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:42:38.370672 1771230 cri.go:89] found id: ""
	I1209 05:42:38.370700 1771230 logs.go:282] 0 containers: []
	W1209 05:42:38.370710 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:42:38.370717 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:42:38.370779 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:42:38.400632 1771230 cri.go:89] found id: ""
	I1209 05:42:38.400658 1771230 logs.go:282] 0 containers: []
	W1209 05:42:38.400668 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:42:38.400675 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:42:38.400737 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:42:38.426948 1771230 cri.go:89] found id: ""
	I1209 05:42:38.426977 1771230 logs.go:282] 0 containers: []
	W1209 05:42:38.426987 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:42:38.426996 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:42:38.427007 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:42:38.499686 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:42:38.499727 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:42:38.520109 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:42:38.520140 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:42:38.601858 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:42:38.601880 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:42:38.601893 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:42:38.636564 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:42:38.636597 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:42:41.174654 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:42:41.185337 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:42:41.185436 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:42:41.211499 1771230 cri.go:89] found id: ""
	I1209 05:42:41.211525 1771230 logs.go:282] 0 containers: []
	W1209 05:42:41.211534 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:42:41.211541 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:42:41.211606 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:42:41.247492 1771230 cri.go:89] found id: ""
	I1209 05:42:41.247515 1771230 logs.go:282] 0 containers: []
	W1209 05:42:41.247523 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:42:41.247529 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:42:41.247586 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:42:41.275964 1771230 cri.go:89] found id: ""
	I1209 05:42:41.275987 1771230 logs.go:282] 0 containers: []
	W1209 05:42:41.276001 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:42:41.276008 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:42:41.276077 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:42:41.302764 1771230 cri.go:89] found id: ""
	I1209 05:42:41.302786 1771230 logs.go:282] 0 containers: []
	W1209 05:42:41.302794 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:42:41.302801 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:42:41.302862 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:42:41.329410 1771230 cri.go:89] found id: ""
	I1209 05:42:41.329432 1771230 logs.go:282] 0 containers: []
	W1209 05:42:41.329440 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:42:41.329447 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:42:41.329512 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:42:41.361519 1771230 cri.go:89] found id: ""
	I1209 05:42:41.361548 1771230 logs.go:282] 0 containers: []
	W1209 05:42:41.361557 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:42:41.361564 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:42:41.361633 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:42:41.388843 1771230 cri.go:89] found id: ""
	I1209 05:42:41.388866 1771230 logs.go:282] 0 containers: []
	W1209 05:42:41.388874 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:42:41.388881 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:42:41.388942 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:42:41.415293 1771230 cri.go:89] found id: ""
	I1209 05:42:41.415317 1771230 logs.go:282] 0 containers: []
	W1209 05:42:41.415325 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:42:41.415354 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:42:41.415372 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:42:41.483684 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:42:41.483723 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:42:41.509966 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:42:41.510002 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:42:41.588647 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:42:41.588670 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:42:41.588684 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:42:41.628916 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:42:41.628964 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
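The block above is one complete iteration of minikube's wait-for-apiserver loop, and from here it repeats essentially unchanged every ~3 seconds: pgrep for the apiserver process, a crictl query per control-plane component (all of which come back empty), then another round of kubelet, dmesg, describe-nodes, CRI-O and container-status log gathering. A minimal sketch of that poll-until-deadline pattern, with hypothetical names (the real logic lives in minikube's logs.go/cri.go and gathers far more state on each miss):

// waitForAPIServer polls crictl until a kube-apiserver container appears
// or the deadline passes. Illustrative only — not minikube's actual code.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// The same query the log shows: list container IDs by name.
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name=kube-apiserver").Output()
		if err == nil && strings.TrimSpace(string(out)) != "" {
			return nil // a kube-apiserver container exists
		}
		time.Sleep(3 * time.Second) // matches the ~3 s cadence in this log
	}
	return fmt.Errorf("kube-apiserver never appeared within %s", timeout)
}

func main() {
	if err := waitForAPIServer(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}

Polling on a short fixed interval against a hard deadline keeps each check cheap, while capturing logs on every miss preserves enough state to debug a start that never converges — which is exactly what the rest of this section records.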
	[... 11 further identical log-gathering cycles, 05:42:44 through 05:43:15, elided: each finds no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, or storage-provisioner containers, and each "describe nodes" run fails with "The connection to the server localhost:8443 was refused - did you specify the right host or port?"; only the order of the kubelet/dmesg/describe-nodes/CRI-O/container-status gathers varies between cycles ...]
	I1209 05:43:17.903320 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:43:17.919260 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:43:17.919334 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:43:17.989548 1771230 cri.go:89] found id: ""
	I1209 05:43:17.989589 1771230 logs.go:282] 0 containers: []
	W1209 05:43:17.989600 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:43:17.989622 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:43:17.989704 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:43:18.047066 1771230 cri.go:89] found id: ""
	I1209 05:43:18.047097 1771230 logs.go:282] 0 containers: []
	W1209 05:43:18.047106 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:43:18.047113 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:43:18.047176 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:43:18.107894 1771230 cri.go:89] found id: ""
	I1209 05:43:18.107924 1771230 logs.go:282] 0 containers: []
	W1209 05:43:18.107933 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:43:18.107940 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:43:18.108005 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:43:18.157496 1771230 cri.go:89] found id: ""
	I1209 05:43:18.157525 1771230 logs.go:282] 0 containers: []
	W1209 05:43:18.157534 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:43:18.157540 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:43:18.157613 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:43:18.222597 1771230 cri.go:89] found id: ""
	I1209 05:43:18.222626 1771230 logs.go:282] 0 containers: []
	W1209 05:43:18.222636 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:43:18.222643 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:43:18.222703 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:43:18.262101 1771230 cri.go:89] found id: ""
	I1209 05:43:18.262130 1771230 logs.go:282] 0 containers: []
	W1209 05:43:18.262139 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:43:18.262146 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:43:18.262210 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:43:18.312065 1771230 cri.go:89] found id: ""
	I1209 05:43:18.312095 1771230 logs.go:282] 0 containers: []
	W1209 05:43:18.312104 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:43:18.312110 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:43:18.312170 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:43:18.363963 1771230 cri.go:89] found id: ""
	I1209 05:43:18.363991 1771230 logs.go:282] 0 containers: []
	W1209 05:43:18.364000 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:43:18.364009 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:43:18.364021 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:43:18.407669 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:43:18.407708 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:43:18.456095 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:43:18.456163 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:43:18.573170 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:43:18.573256 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:43:18.607767 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:43:18.607843 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:43:18.772309 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:43:21.273161 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:43:21.301240 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:43:21.301363 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:43:21.352431 1771230 cri.go:89] found id: ""
	I1209 05:43:21.352504 1771230 logs.go:282] 0 containers: []
	W1209 05:43:21.352526 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:43:21.352546 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:43:21.352631 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:43:21.397255 1771230 cri.go:89] found id: ""
	I1209 05:43:21.397329 1771230 logs.go:282] 0 containers: []
	W1209 05:43:21.397353 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:43:21.397373 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:43:21.397460 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:43:21.453112 1771230 cri.go:89] found id: ""
	I1209 05:43:21.453183 1771230 logs.go:282] 0 containers: []
	W1209 05:43:21.453206 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:43:21.453226 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:43:21.453312 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:43:21.507970 1771230 cri.go:89] found id: ""
	I1209 05:43:21.508043 1771230 logs.go:282] 0 containers: []
	W1209 05:43:21.508066 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:43:21.508085 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:43:21.508172 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:43:21.557732 1771230 cri.go:89] found id: ""
	I1209 05:43:21.557809 1771230 logs.go:282] 0 containers: []
	W1209 05:43:21.557833 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:43:21.557853 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:43:21.557943 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:43:21.615637 1771230 cri.go:89] found id: ""
	I1209 05:43:21.615710 1771230 logs.go:282] 0 containers: []
	W1209 05:43:21.615733 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:43:21.615753 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:43:21.615838 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:43:21.666235 1771230 cri.go:89] found id: ""
	I1209 05:43:21.666312 1771230 logs.go:282] 0 containers: []
	W1209 05:43:21.666348 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:43:21.666373 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:43:21.666460 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:43:21.721971 1771230 cri.go:89] found id: ""
	I1209 05:43:21.722060 1771230 logs.go:282] 0 containers: []
	W1209 05:43:21.722082 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:43:21.722105 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:43:21.722141 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:43:21.758492 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:43:21.758520 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:43:21.813256 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:43:21.813282 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:43:21.931352 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:43:21.931459 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:43:21.974987 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:43:21.975070 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:43:22.128034 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:43:24.628327 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:43:24.639798 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:43:24.639867 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:43:24.666698 1771230 cri.go:89] found id: ""
	I1209 05:43:24.666730 1771230 logs.go:282] 0 containers: []
	W1209 05:43:24.666739 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:43:24.666746 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:43:24.666806 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:43:24.705134 1771230 cri.go:89] found id: ""
	I1209 05:43:24.705160 1771230 logs.go:282] 0 containers: []
	W1209 05:43:24.705169 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:43:24.705178 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:43:24.705237 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:43:24.748336 1771230 cri.go:89] found id: ""
	I1209 05:43:24.748358 1771230 logs.go:282] 0 containers: []
	W1209 05:43:24.748366 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:43:24.748373 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:43:24.748439 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:43:24.781793 1771230 cri.go:89] found id: ""
	I1209 05:43:24.781823 1771230 logs.go:282] 0 containers: []
	W1209 05:43:24.781831 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:43:24.781838 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:43:24.781906 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:43:24.818604 1771230 cri.go:89] found id: ""
	I1209 05:43:24.818627 1771230 logs.go:282] 0 containers: []
	W1209 05:43:24.818636 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:43:24.818663 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:43:24.818727 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:43:24.848228 1771230 cri.go:89] found id: ""
	I1209 05:43:24.848265 1771230 logs.go:282] 0 containers: []
	W1209 05:43:24.848274 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:43:24.848282 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:43:24.848356 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:43:24.893486 1771230 cri.go:89] found id: ""
	I1209 05:43:24.893511 1771230 logs.go:282] 0 containers: []
	W1209 05:43:24.893521 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:43:24.893575 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:43:24.893645 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:43:24.933184 1771230 cri.go:89] found id: ""
	I1209 05:43:24.933208 1771230 logs.go:282] 0 containers: []
	W1209 05:43:24.933216 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:43:24.933225 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:43:24.933236 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:43:24.977668 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:43:24.977703 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:43:25.019114 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:43:25.019149 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:43:25.104421 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:43:25.104460 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:43:25.145033 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:43:25.145066 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:43:25.247810 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:43:27.748080 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:43:27.760827 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:43:27.761204 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:43:27.795658 1771230 cri.go:89] found id: ""
	I1209 05:43:27.795703 1771230 logs.go:282] 0 containers: []
	W1209 05:43:27.795715 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:43:27.795724 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:43:27.795871 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:43:27.824577 1771230 cri.go:89] found id: ""
	I1209 05:43:27.824599 1771230 logs.go:282] 0 containers: []
	W1209 05:43:27.824608 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:43:27.824615 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:43:27.824673 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:43:27.860226 1771230 cri.go:89] found id: ""
	I1209 05:43:27.860249 1771230 logs.go:282] 0 containers: []
	W1209 05:43:27.860258 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:43:27.860264 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:43:27.860325 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:43:27.897535 1771230 cri.go:89] found id: ""
	I1209 05:43:27.897575 1771230 logs.go:282] 0 containers: []
	W1209 05:43:27.897584 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:43:27.897591 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:43:27.897651 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:43:27.948880 1771230 cri.go:89] found id: ""
	I1209 05:43:27.948912 1771230 logs.go:282] 0 containers: []
	W1209 05:43:27.948923 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:43:27.948929 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:43:27.948987 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:43:27.990804 1771230 cri.go:89] found id: ""
	I1209 05:43:27.990830 1771230 logs.go:282] 0 containers: []
	W1209 05:43:27.990839 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:43:27.990846 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:43:27.990906 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:43:28.024553 1771230 cri.go:89] found id: ""
	I1209 05:43:28.024578 1771230 logs.go:282] 0 containers: []
	W1209 05:43:28.024587 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:43:28.024594 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:43:28.024656 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:43:28.063051 1771230 cri.go:89] found id: ""
	I1209 05:43:28.063077 1771230 logs.go:282] 0 containers: []
	W1209 05:43:28.063087 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:43:28.063096 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:43:28.063109 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:43:28.192067 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:43:28.192147 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:43:28.213125 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:43:28.213153 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:43:28.282120 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:43:28.282136 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:43:28.282148 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:43:28.313036 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:43:28.313111 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:43:30.847251 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:43:30.860160 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:43:30.860228 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:43:30.909799 1771230 cri.go:89] found id: ""
	I1209 05:43:30.909821 1771230 logs.go:282] 0 containers: []
	W1209 05:43:30.909829 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:43:30.909836 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:43:30.909900 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:43:30.943162 1771230 cri.go:89] found id: ""
	I1209 05:43:30.943182 1771230 logs.go:282] 0 containers: []
	W1209 05:43:30.943190 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:43:30.943196 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:43:30.943257 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:43:30.978621 1771230 cri.go:89] found id: ""
	I1209 05:43:30.978697 1771230 logs.go:282] 0 containers: []
	W1209 05:43:30.978719 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:43:30.978738 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:43:30.978830 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:43:31.012127 1771230 cri.go:89] found id: ""
	I1209 05:43:31.012151 1771230 logs.go:282] 0 containers: []
	W1209 05:43:31.012160 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:43:31.012167 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:43:31.012233 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:43:31.048416 1771230 cri.go:89] found id: ""
	I1209 05:43:31.048438 1771230 logs.go:282] 0 containers: []
	W1209 05:43:31.048446 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:43:31.048452 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:43:31.048513 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:43:31.099815 1771230 cri.go:89] found id: ""
	I1209 05:43:31.099844 1771230 logs.go:282] 0 containers: []
	W1209 05:43:31.099863 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:43:31.099871 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:43:31.099932 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:43:31.179715 1771230 cri.go:89] found id: ""
	I1209 05:43:31.179781 1771230 logs.go:282] 0 containers: []
	W1209 05:43:31.179792 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:43:31.179799 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:43:31.179873 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:43:31.246380 1771230 cri.go:89] found id: ""
	I1209 05:43:31.246456 1771230 logs.go:282] 0 containers: []
	W1209 05:43:31.246479 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:43:31.246500 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:43:31.246525 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:43:31.338861 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:43:31.338946 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:43:31.356670 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:43:31.356696 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:43:31.463249 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:43:31.463269 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:43:31.463283 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:43:31.494624 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:43:31.494662 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:43:34.042742 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:43:34.059079 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:43:34.059153 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:43:34.115031 1771230 cri.go:89] found id: ""
	I1209 05:43:34.115054 1771230 logs.go:282] 0 containers: []
	W1209 05:43:34.115063 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:43:34.115069 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:43:34.115135 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:43:34.165429 1771230 cri.go:89] found id: ""
	I1209 05:43:34.165452 1771230 logs.go:282] 0 containers: []
	W1209 05:43:34.165461 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:43:34.165468 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:43:34.165531 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:43:34.209450 1771230 cri.go:89] found id: ""
	I1209 05:43:34.209492 1771230 logs.go:282] 0 containers: []
	W1209 05:43:34.209503 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:43:34.209510 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:43:34.209580 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:43:34.259567 1771230 cri.go:89] found id: ""
	I1209 05:43:34.259589 1771230 logs.go:282] 0 containers: []
	W1209 05:43:34.259597 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:43:34.259605 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:43:34.259664 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:43:34.299877 1771230 cri.go:89] found id: ""
	I1209 05:43:34.299956 1771230 logs.go:282] 0 containers: []
	W1209 05:43:34.299968 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:43:34.299974 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:43:34.300035 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:43:34.352097 1771230 cri.go:89] found id: ""
	I1209 05:43:34.352126 1771230 logs.go:282] 0 containers: []
	W1209 05:43:34.352134 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:43:34.352141 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:43:34.352212 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:43:34.390693 1771230 cri.go:89] found id: ""
	I1209 05:43:34.390715 1771230 logs.go:282] 0 containers: []
	W1209 05:43:34.390724 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:43:34.390731 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:43:34.390792 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:43:34.439593 1771230 cri.go:89] found id: ""
	I1209 05:43:34.439615 1771230 logs.go:282] 0 containers: []
	W1209 05:43:34.439623 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:43:34.439632 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:43:34.439644 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:43:34.532781 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:43:34.532860 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:43:34.551053 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:43:34.551122 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:43:34.637903 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:43:34.637969 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:43:34.637996 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:43:34.673943 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:43:34.674021 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:43:37.207496 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:43:37.217798 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:43:37.217868 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:43:37.249601 1771230 cri.go:89] found id: ""
	I1209 05:43:37.249623 1771230 logs.go:282] 0 containers: []
	W1209 05:43:37.249631 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:43:37.249638 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:43:37.249705 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:43:37.282677 1771230 cri.go:89] found id: ""
	I1209 05:43:37.282697 1771230 logs.go:282] 0 containers: []
	W1209 05:43:37.282706 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:43:37.282712 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:43:37.282772 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:43:37.335876 1771230 cri.go:89] found id: ""
	I1209 05:43:37.335897 1771230 logs.go:282] 0 containers: []
	W1209 05:43:37.335906 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:43:37.335913 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:43:37.335979 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:43:37.367212 1771230 cri.go:89] found id: ""
	I1209 05:43:37.367232 1771230 logs.go:282] 0 containers: []
	W1209 05:43:37.367241 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:43:37.367248 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:43:37.367309 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:43:37.404747 1771230 cri.go:89] found id: ""
	I1209 05:43:37.404825 1771230 logs.go:282] 0 containers: []
	W1209 05:43:37.404847 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:43:37.404865 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:43:37.404970 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:43:37.443104 1771230 cri.go:89] found id: ""
	I1209 05:43:37.443125 1771230 logs.go:282] 0 containers: []
	W1209 05:43:37.443133 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:43:37.443140 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:43:37.443200 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:43:37.474828 1771230 cri.go:89] found id: ""
	I1209 05:43:37.474852 1771230 logs.go:282] 0 containers: []
	W1209 05:43:37.474860 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:43:37.474867 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:43:37.474929 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:43:37.507176 1771230 cri.go:89] found id: ""
	I1209 05:43:37.507198 1771230 logs.go:282] 0 containers: []
	W1209 05:43:37.507207 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:43:37.507215 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:43:37.507227 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:43:37.580975 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:43:37.580995 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:43:37.581008 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:43:37.619522 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:43:37.619562 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:43:37.661513 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:43:37.661549 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:43:37.738408 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:43:37.738483 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:43:40.255776 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:43:40.270246 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:43:40.270315 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:43:40.306461 1771230 cri.go:89] found id: ""
	I1209 05:43:40.306482 1771230 logs.go:282] 0 containers: []
	W1209 05:43:40.306491 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:43:40.306497 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:43:40.306555 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:43:40.365988 1771230 cri.go:89] found id: ""
	I1209 05:43:40.366010 1771230 logs.go:282] 0 containers: []
	W1209 05:43:40.366019 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:43:40.366025 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:43:40.366081 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:43:40.410835 1771230 cri.go:89] found id: ""
	I1209 05:43:40.410856 1771230 logs.go:282] 0 containers: []
	W1209 05:43:40.410864 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:43:40.410870 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:43:40.410928 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:43:40.446940 1771230 cri.go:89] found id: ""
	I1209 05:43:40.446962 1771230 logs.go:282] 0 containers: []
	W1209 05:43:40.446971 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:43:40.446977 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:43:40.447038 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:43:40.487728 1771230 cri.go:89] found id: ""
	I1209 05:43:40.487754 1771230 logs.go:282] 0 containers: []
	W1209 05:43:40.487764 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:43:40.487771 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:43:40.487829 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:43:40.528404 1771230 cri.go:89] found id: ""
	I1209 05:43:40.528431 1771230 logs.go:282] 0 containers: []
	W1209 05:43:40.528440 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:43:40.528452 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:43:40.528513 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:43:40.564949 1771230 cri.go:89] found id: ""
	I1209 05:43:40.564970 1771230 logs.go:282] 0 containers: []
	W1209 05:43:40.564979 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:43:40.564986 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:43:40.565051 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:43:40.603384 1771230 cri.go:89] found id: ""
	I1209 05:43:40.603406 1771230 logs.go:282] 0 containers: []
	W1209 05:43:40.603415 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:43:40.603423 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:43:40.603434 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:43:40.687322 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:43:40.687402 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:43:40.709071 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:43:40.709096 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:43:40.794418 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:43:40.794494 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:43:40.794522 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:43:40.833576 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:43:40.833613 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:43:43.385470 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:43:43.402740 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:43:43.402819 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:43:43.478266 1771230 cri.go:89] found id: ""
	I1209 05:43:43.478288 1771230 logs.go:282] 0 containers: []
	W1209 05:43:43.478297 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:43:43.478303 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:43:43.478371 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:43:43.516981 1771230 cri.go:89] found id: ""
	I1209 05:43:43.517004 1771230 logs.go:282] 0 containers: []
	W1209 05:43:43.517013 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:43:43.517019 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:43:43.517077 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:43:43.546563 1771230 cri.go:89] found id: ""
	I1209 05:43:43.546663 1771230 logs.go:282] 0 containers: []
	W1209 05:43:43.546685 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:43:43.546704 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:43:43.546813 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:43:43.575514 1771230 cri.go:89] found id: ""
	I1209 05:43:43.575534 1771230 logs.go:282] 0 containers: []
	W1209 05:43:43.575542 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:43:43.575549 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:43:43.575606 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:43:43.615107 1771230 cri.go:89] found id: ""
	I1209 05:43:43.615129 1771230 logs.go:282] 0 containers: []
	W1209 05:43:43.615137 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:43:43.615144 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:43:43.615203 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:43:43.644437 1771230 cri.go:89] found id: ""
	I1209 05:43:43.644460 1771230 logs.go:282] 0 containers: []
	W1209 05:43:43.644468 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:43:43.644475 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:43:43.644541 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:43:43.681097 1771230 cri.go:89] found id: ""
	I1209 05:43:43.681119 1771230 logs.go:282] 0 containers: []
	W1209 05:43:43.681127 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:43:43.681133 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:43:43.681192 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:43:43.721458 1771230 cri.go:89] found id: ""
	I1209 05:43:43.721533 1771230 logs.go:282] 0 containers: []
	W1209 05:43:43.721555 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:43:43.721602 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:43:43.721633 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:43:43.804267 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:43:43.804345 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:43:43.804372 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:43:43.844229 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:43:43.844262 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:43:43.891453 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:43:43.891533 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:43:43.965354 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:43:43.965445 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:43:46.486604 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:43:46.497151 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:43:46.497241 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:43:46.548929 1771230 cri.go:89] found id: ""
	I1209 05:43:46.548954 1771230 logs.go:282] 0 containers: []
	W1209 05:43:46.548966 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:43:46.548973 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:43:46.549030 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:43:46.588967 1771230 cri.go:89] found id: ""
	I1209 05:43:46.588991 1771230 logs.go:282] 0 containers: []
	W1209 05:43:46.588999 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:43:46.589005 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:43:46.589062 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:43:46.633176 1771230 cri.go:89] found id: ""
	I1209 05:43:46.633201 1771230 logs.go:282] 0 containers: []
	W1209 05:43:46.633210 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:43:46.633219 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:43:46.633293 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:43:46.662820 1771230 cri.go:89] found id: ""
	I1209 05:43:46.662848 1771230 logs.go:282] 0 containers: []
	W1209 05:43:46.662858 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:43:46.662867 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:43:46.662935 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:43:46.691574 1771230 cri.go:89] found id: ""
	I1209 05:43:46.691600 1771230 logs.go:282] 0 containers: []
	W1209 05:43:46.691609 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:43:46.691616 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:43:46.691677 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:43:46.728362 1771230 cri.go:89] found id: ""
	I1209 05:43:46.728387 1771230 logs.go:282] 0 containers: []
	W1209 05:43:46.728396 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:43:46.728403 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:43:46.728460 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:43:46.760343 1771230 cri.go:89] found id: ""
	I1209 05:43:46.760409 1771230 logs.go:282] 0 containers: []
	W1209 05:43:46.760443 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:43:46.760464 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:43:46.760543 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:43:46.794282 1771230 cri.go:89] found id: ""
	I1209 05:43:46.794356 1771230 logs.go:282] 0 containers: []
	W1209 05:43:46.794379 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:43:46.794403 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:43:46.794430 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:43:46.836810 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:43:46.836901 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:43:46.922760 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:43:46.922784 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:43:47.011951 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:43:47.012064 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:43:47.031792 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:43:47.031879 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:43:47.115346 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:43:49.615819 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:43:49.627862 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:43:49.627939 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:43:49.666910 1771230 cri.go:89] found id: ""
	I1209 05:43:49.666994 1771230 logs.go:282] 0 containers: []
	W1209 05:43:49.667026 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:43:49.667067 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:43:49.667175 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:43:49.702117 1771230 cri.go:89] found id: ""
	I1209 05:43:49.702195 1771230 logs.go:282] 0 containers: []
	W1209 05:43:49.702219 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:43:49.702239 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:43:49.702360 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:43:49.740326 1771230 cri.go:89] found id: ""
	I1209 05:43:49.740418 1771230 logs.go:282] 0 containers: []
	W1209 05:43:49.740441 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:43:49.740481 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:43:49.740598 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:43:49.777138 1771230 cri.go:89] found id: ""
	I1209 05:43:49.777218 1771230 logs.go:282] 0 containers: []
	W1209 05:43:49.777241 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:43:49.777259 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:43:49.777349 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:43:49.817035 1771230 cri.go:89] found id: ""
	I1209 05:43:49.817112 1771230 logs.go:282] 0 containers: []
	W1209 05:43:49.817150 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:43:49.817174 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:43:49.817262 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:43:49.885900 1771230 cri.go:89] found id: ""
	I1209 05:43:49.885980 1771230 logs.go:282] 0 containers: []
	W1209 05:43:49.886018 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:43:49.886042 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:43:49.886131 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:43:49.918374 1771230 cri.go:89] found id: ""
	I1209 05:43:49.918449 1771230 logs.go:282] 0 containers: []
	W1209 05:43:49.918471 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:43:49.918490 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:43:49.918587 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:43:49.950312 1771230 cri.go:89] found id: ""
	I1209 05:43:49.950386 1771230 logs.go:282] 0 containers: []
	W1209 05:43:49.950408 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:43:49.950433 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:43:49.950483 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:43:49.985763 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:43:49.985796 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:43:50.022082 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:43:50.022167 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:43:50.115911 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:43:50.116070 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:43:50.153775 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:43:50.153803 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:43:50.255785 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
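
Each "describe nodes" attempt fails the same way: kubectl cannot reach the apiserver on localhost:8443 (connection refused), which is consistent with the empty crictl listings, since no apiserver container exists to serve that port. A quick reachability check along the same lines (binary and kubeconfig paths are taken from the failing command above; this is a diagnostic sketch, not part of the test):

    # Sketch: probe the apiserver readiness endpoint directly.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig get --raw=/readyz \
      || echo "apiserver not reachable on localhost:8443"

/readyz returns "ok" once the apiserver is serving; here it would fail with the same connection-refused error seen in the log.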
	I1209 05:43:52.756510 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:43:52.767623 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:43:52.767693 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:43:52.796032 1771230 cri.go:89] found id: ""
	I1209 05:43:52.796055 1771230 logs.go:282] 0 containers: []
	W1209 05:43:52.796065 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:43:52.796070 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:43:52.796140 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:43:52.822085 1771230 cri.go:89] found id: ""
	I1209 05:43:52.822112 1771230 logs.go:282] 0 containers: []
	W1209 05:43:52.822121 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:43:52.822127 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:43:52.822192 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:43:52.851225 1771230 cri.go:89] found id: ""
	I1209 05:43:52.851248 1771230 logs.go:282] 0 containers: []
	W1209 05:43:52.851257 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:43:52.851264 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:43:52.851325 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:43:52.877287 1771230 cri.go:89] found id: ""
	I1209 05:43:52.877313 1771230 logs.go:282] 0 containers: []
	W1209 05:43:52.877321 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:43:52.877328 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:43:52.877387 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:43:52.904070 1771230 cri.go:89] found id: ""
	I1209 05:43:52.904096 1771230 logs.go:282] 0 containers: []
	W1209 05:43:52.904104 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:43:52.904111 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:43:52.904173 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:43:52.932193 1771230 cri.go:89] found id: ""
	I1209 05:43:52.932218 1771230 logs.go:282] 0 containers: []
	W1209 05:43:52.932227 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:43:52.932233 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:43:52.932291 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:43:52.962112 1771230 cri.go:89] found id: ""
	I1209 05:43:52.962137 1771230 logs.go:282] 0 containers: []
	W1209 05:43:52.962146 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:43:52.962152 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:43:52.962212 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:43:52.988700 1771230 cri.go:89] found id: ""
	I1209 05:43:52.988723 1771230 logs.go:282] 0 containers: []
	W1209 05:43:52.988732 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:43:52.988741 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:43:52.988752 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:43:53.057618 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:43:53.057640 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:43:53.057653 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:43:53.093540 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:43:53.093577 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:43:53.131113 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:43:53.131138 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:43:53.226956 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:43:53.226999 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
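
On every pass minikube also gathers a standard log bundle: kubelet unit logs (pod lifecycle and start errors, including the static control-plane pods), dmesg filtered to warn and above (kernel-level problems such as OOM kills), CRI-O unit logs (container create and start failures in the runtime), and a container listing. With no control-plane containers running, these are the only diagnostics available.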
	I1209 05:43:55.750563 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:43:55.767116 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:43:55.767241 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:43:55.821571 1771230 cri.go:89] found id: ""
	I1209 05:43:55.821593 1771230 logs.go:282] 0 containers: []
	W1209 05:43:55.821603 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:43:55.821609 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:43:55.821672 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:43:55.868711 1771230 cri.go:89] found id: ""
	I1209 05:43:55.868733 1771230 logs.go:282] 0 containers: []
	W1209 05:43:55.868742 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:43:55.868747 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:43:55.868810 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:43:55.920856 1771230 cri.go:89] found id: ""
	I1209 05:43:55.920878 1771230 logs.go:282] 0 containers: []
	W1209 05:43:55.920887 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:43:55.920894 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:43:55.920953 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:43:55.971559 1771230 cri.go:89] found id: ""
	I1209 05:43:55.971585 1771230 logs.go:282] 0 containers: []
	W1209 05:43:55.971598 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:43:55.971605 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:43:55.971662 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:43:56.022085 1771230 cri.go:89] found id: ""
	I1209 05:43:56.022112 1771230 logs.go:282] 0 containers: []
	W1209 05:43:56.022122 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:43:56.022128 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:43:56.022197 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:43:56.067417 1771230 cri.go:89] found id: ""
	I1209 05:43:56.067443 1771230 logs.go:282] 0 containers: []
	W1209 05:43:56.067452 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:43:56.067458 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:43:56.067560 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:43:56.110232 1771230 cri.go:89] found id: ""
	I1209 05:43:56.110264 1771230 logs.go:282] 0 containers: []
	W1209 05:43:56.110275 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:43:56.110284 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:43:56.110393 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:43:56.169581 1771230 cri.go:89] found id: ""
	I1209 05:43:56.169608 1771230 logs.go:282] 0 containers: []
	W1209 05:43:56.169617 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:43:56.169644 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:43:56.169664 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:43:56.239763 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:43:56.239789 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:43:56.343589 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:43:56.343626 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:43:56.361579 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:43:56.361608 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:43:56.477791 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:43:56.477813 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:43:56.477829 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
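
Note the fallback built into the container-status command: the `which crictl || echo crictl` substitution resolves crictl if it is on PATH, and the trailing "|| sudo docker ps -a" falls back to a docker listing when the crictl call fails, so the same one-liner works across container runtimes.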
	I1209 05:43:59.031971 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:43:59.043341 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:43:59.043414 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:43:59.075613 1771230 cri.go:89] found id: ""
	I1209 05:43:59.075637 1771230 logs.go:282] 0 containers: []
	W1209 05:43:59.075646 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:43:59.075652 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:43:59.075717 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:43:59.120129 1771230 cri.go:89] found id: ""
	I1209 05:43:59.120157 1771230 logs.go:282] 0 containers: []
	W1209 05:43:59.120166 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:43:59.120173 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:43:59.120239 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:43:59.149457 1771230 cri.go:89] found id: ""
	I1209 05:43:59.149484 1771230 logs.go:282] 0 containers: []
	W1209 05:43:59.149493 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:43:59.149501 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:43:59.149573 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:43:59.187567 1771230 cri.go:89] found id: ""
	I1209 05:43:59.187595 1771230 logs.go:282] 0 containers: []
	W1209 05:43:59.187604 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:43:59.187611 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:43:59.187678 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:43:59.231229 1771230 cri.go:89] found id: ""
	I1209 05:43:59.231252 1771230 logs.go:282] 0 containers: []
	W1209 05:43:59.231260 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:43:59.231267 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:43:59.231386 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:43:59.266193 1771230 cri.go:89] found id: ""
	I1209 05:43:59.266215 1771230 logs.go:282] 0 containers: []
	W1209 05:43:59.266224 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:43:59.266231 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:43:59.266297 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:43:59.302813 1771230 cri.go:89] found id: ""
	I1209 05:43:59.302836 1771230 logs.go:282] 0 containers: []
	W1209 05:43:59.302845 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:43:59.302852 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:43:59.302914 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:43:59.342567 1771230 cri.go:89] found id: ""
	I1209 05:43:59.342610 1771230 logs.go:282] 0 containers: []
	W1209 05:43:59.342619 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:43:59.342627 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:43:59.342639 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:43:59.401534 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:43:59.401579 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:43:59.460756 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:43:59.460783 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:43:59.549394 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:43:59.549476 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:43:59.567726 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:43:59.567803 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:43:59.659621 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:44:02.160308 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:44:02.176153 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:44:02.176241 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:44:02.234144 1771230 cri.go:89] found id: ""
	I1209 05:44:02.234173 1771230 logs.go:282] 0 containers: []
	W1209 05:44:02.234182 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:44:02.234190 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:44:02.234258 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:44:02.284798 1771230 cri.go:89] found id: ""
	I1209 05:44:02.284875 1771230 logs.go:282] 0 containers: []
	W1209 05:44:02.284898 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:44:02.284916 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:44:02.285044 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:44:02.329291 1771230 cri.go:89] found id: ""
	I1209 05:44:02.329326 1771230 logs.go:282] 0 containers: []
	W1209 05:44:02.329336 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:44:02.329342 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:44:02.329414 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:44:02.378385 1771230 cri.go:89] found id: ""
	I1209 05:44:02.378407 1771230 logs.go:282] 0 containers: []
	W1209 05:44:02.378415 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:44:02.378422 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:44:02.378482 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:44:02.420756 1771230 cri.go:89] found id: ""
	I1209 05:44:02.420779 1771230 logs.go:282] 0 containers: []
	W1209 05:44:02.420787 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:44:02.420794 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:44:02.420860 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:44:02.463890 1771230 cri.go:89] found id: ""
	I1209 05:44:02.463911 1771230 logs.go:282] 0 containers: []
	W1209 05:44:02.463920 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:44:02.463926 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:44:02.463987 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:44:02.546446 1771230 cri.go:89] found id: ""
	I1209 05:44:02.546534 1771230 logs.go:282] 0 containers: []
	W1209 05:44:02.546547 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:44:02.546554 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:44:02.546634 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:44:02.655559 1771230 cri.go:89] found id: ""
	I1209 05:44:02.655581 1771230 logs.go:282] 0 containers: []
	W1209 05:44:02.655590 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:44:02.655599 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:44:02.655611 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:44:02.757245 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:44:02.757329 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:44:02.780811 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:44:02.780897 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:44:02.876971 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:44:02.877034 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:44:02.877063 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:44:02.932556 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:44:02.932661 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:44:05.491760 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:44:05.504235 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:44:05.504304 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:44:05.551642 1771230 cri.go:89] found id: ""
	I1209 05:44:05.551664 1771230 logs.go:282] 0 containers: []
	W1209 05:44:05.551673 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:44:05.551680 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:44:05.551737 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:44:05.605295 1771230 cri.go:89] found id: ""
	I1209 05:44:05.605319 1771230 logs.go:282] 0 containers: []
	W1209 05:44:05.605328 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:44:05.605334 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:44:05.605397 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:44:05.648545 1771230 cri.go:89] found id: ""
	I1209 05:44:05.648682 1771230 logs.go:282] 0 containers: []
	W1209 05:44:05.648702 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:44:05.648712 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:44:05.648788 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:44:05.687022 1771230 cri.go:89] found id: ""
	I1209 05:44:05.687136 1771230 logs.go:282] 0 containers: []
	W1209 05:44:05.687154 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:44:05.687162 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:44:05.687236 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:44:05.721833 1771230 cri.go:89] found id: ""
	I1209 05:44:05.721857 1771230 logs.go:282] 0 containers: []
	W1209 05:44:05.721866 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:44:05.721872 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:44:05.721942 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:44:05.754211 1771230 cri.go:89] found id: ""
	I1209 05:44:05.754235 1771230 logs.go:282] 0 containers: []
	W1209 05:44:05.754243 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:44:05.754249 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:44:05.754305 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:44:05.792833 1771230 cri.go:89] found id: ""
	I1209 05:44:05.792860 1771230 logs.go:282] 0 containers: []
	W1209 05:44:05.792870 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:44:05.792876 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:44:05.792937 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:44:05.849941 1771230 cri.go:89] found id: ""
	I1209 05:44:05.849970 1771230 logs.go:282] 0 containers: []
	W1209 05:44:05.849978 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:44:05.849986 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:44:05.849999 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:44:05.979215 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:44:05.979241 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:44:05.979253 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:44:06.027651 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:44:06.027745 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:44:06.064264 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:44:06.064291 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:44:06.138863 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:44:06.138947 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:44:08.657898 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:44:08.668138 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:44:08.668218 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:44:08.693787 1771230 cri.go:89] found id: ""
	I1209 05:44:08.693814 1771230 logs.go:282] 0 containers: []
	W1209 05:44:08.693823 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:44:08.693830 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:44:08.693891 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:44:08.719456 1771230 cri.go:89] found id: ""
	I1209 05:44:08.719482 1771230 logs.go:282] 0 containers: []
	W1209 05:44:08.719498 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:44:08.719504 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:44:08.719565 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:44:08.745982 1771230 cri.go:89] found id: ""
	I1209 05:44:08.746007 1771230 logs.go:282] 0 containers: []
	W1209 05:44:08.746017 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:44:08.746024 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:44:08.746082 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:44:08.772129 1771230 cri.go:89] found id: ""
	I1209 05:44:08.772154 1771230 logs.go:282] 0 containers: []
	W1209 05:44:08.772163 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:44:08.772170 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:44:08.772229 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:44:08.796808 1771230 cri.go:89] found id: ""
	I1209 05:44:08.796833 1771230 logs.go:282] 0 containers: []
	W1209 05:44:08.796843 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:44:08.796850 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:44:08.796915 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:44:08.822851 1771230 cri.go:89] found id: ""
	I1209 05:44:08.822874 1771230 logs.go:282] 0 containers: []
	W1209 05:44:08.822883 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:44:08.822890 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:44:08.822956 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:44:08.847811 1771230 cri.go:89] found id: ""
	I1209 05:44:08.847834 1771230 logs.go:282] 0 containers: []
	W1209 05:44:08.847842 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:44:08.847848 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:44:08.847911 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:44:08.877131 1771230 cri.go:89] found id: ""
	I1209 05:44:08.877205 1771230 logs.go:282] 0 containers: []
	W1209 05:44:08.877228 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:44:08.877250 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:44:08.877275 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:44:08.948884 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:44:08.948922 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:44:08.965416 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:44:08.965447 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:44:09.030454 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:44:09.030482 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:44:09.030494 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:44:09.062096 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:44:09.062133 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:44:11.604334 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:44:11.614734 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:44:11.614798 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:44:11.648572 1771230 cri.go:89] found id: ""
	I1209 05:44:11.648597 1771230 logs.go:282] 0 containers: []
	W1209 05:44:11.648605 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:44:11.648611 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:44:11.648673 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:44:11.684344 1771230 cri.go:89] found id: ""
	I1209 05:44:11.684366 1771230 logs.go:282] 0 containers: []
	W1209 05:44:11.684374 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:44:11.684381 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:44:11.684439 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:44:11.711521 1771230 cri.go:89] found id: ""
	I1209 05:44:11.711544 1771230 logs.go:282] 0 containers: []
	W1209 05:44:11.711553 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:44:11.711560 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:44:11.711619 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:44:11.740573 1771230 cri.go:89] found id: ""
	I1209 05:44:11.740654 1771230 logs.go:282] 0 containers: []
	W1209 05:44:11.740677 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:44:11.740695 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:44:11.740809 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:44:11.779411 1771230 cri.go:89] found id: ""
	I1209 05:44:11.779432 1771230 logs.go:282] 0 containers: []
	W1209 05:44:11.779440 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:44:11.779447 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:44:11.779504 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:44:11.817350 1771230 cri.go:89] found id: ""
	I1209 05:44:11.817372 1771230 logs.go:282] 0 containers: []
	W1209 05:44:11.817381 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:44:11.817387 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:44:11.817448 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:44:11.847752 1771230 cri.go:89] found id: ""
	I1209 05:44:11.847773 1771230 logs.go:282] 0 containers: []
	W1209 05:44:11.847781 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:44:11.847788 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:44:11.847851 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:44:11.876569 1771230 cri.go:89] found id: ""
	I1209 05:44:11.876592 1771230 logs.go:282] 0 containers: []
	W1209 05:44:11.876601 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:44:11.876610 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:44:11.876622 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:44:11.959349 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:44:11.959366 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:44:11.959378 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:44:11.998521 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:44:11.998559 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:44:12.045228 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:44:12.045257 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:44:12.158128 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:44:12.158210 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:44:14.687881 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:44:14.698204 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:44:14.698273 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:44:14.729541 1771230 cri.go:89] found id: ""
	I1209 05:44:14.729573 1771230 logs.go:282] 0 containers: []
	W1209 05:44:14.729581 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:44:14.729588 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:44:14.729646 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:44:14.766805 1771230 cri.go:89] found id: ""
	I1209 05:44:14.766826 1771230 logs.go:282] 0 containers: []
	W1209 05:44:14.766834 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:44:14.766840 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:44:14.766910 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:44:14.797589 1771230 cri.go:89] found id: ""
	I1209 05:44:14.797613 1771230 logs.go:282] 0 containers: []
	W1209 05:44:14.797621 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:44:14.797634 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:44:14.797713 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:44:14.828166 1771230 cri.go:89] found id: ""
	I1209 05:44:14.828187 1771230 logs.go:282] 0 containers: []
	W1209 05:44:14.828195 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:44:14.828201 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:44:14.828262 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:44:14.864233 1771230 cri.go:89] found id: ""
	I1209 05:44:14.864254 1771230 logs.go:282] 0 containers: []
	W1209 05:44:14.864261 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:44:14.864271 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:44:14.864331 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:44:14.900291 1771230 cri.go:89] found id: ""
	I1209 05:44:14.900312 1771230 logs.go:282] 0 containers: []
	W1209 05:44:14.900320 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:44:14.900327 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:44:14.900383 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:44:14.932737 1771230 cri.go:89] found id: ""
	I1209 05:44:14.932758 1771230 logs.go:282] 0 containers: []
	W1209 05:44:14.932766 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:44:14.932773 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:44:14.932831 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:44:14.960281 1771230 cri.go:89] found id: ""
	I1209 05:44:14.960301 1771230 logs.go:282] 0 containers: []
	W1209 05:44:14.960309 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:44:14.960320 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:44:14.960336 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:44:14.978789 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:44:14.978875 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:44:15.067757 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:44:15.067774 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:44:15.067788 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:44:15.146882 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:44:15.146966 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:44:15.197195 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:44:15.197273 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:44:17.775315 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:44:17.789260 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:44:17.789336 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:44:17.834973 1771230 cri.go:89] found id: ""
	I1209 05:44:17.835010 1771230 logs.go:282] 0 containers: []
	W1209 05:44:17.835019 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:44:17.835026 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:44:17.835088 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:44:17.881027 1771230 cri.go:89] found id: ""
	I1209 05:44:17.881055 1771230 logs.go:282] 0 containers: []
	W1209 05:44:17.881065 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:44:17.881071 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:44:17.881133 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:44:17.924097 1771230 cri.go:89] found id: ""
	I1209 05:44:17.924124 1771230 logs.go:282] 0 containers: []
	W1209 05:44:17.924132 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:44:17.924139 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:44:17.924213 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:44:17.991994 1771230 cri.go:89] found id: ""
	I1209 05:44:17.992020 1771230 logs.go:282] 0 containers: []
	W1209 05:44:17.992029 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:44:17.992039 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:44:17.992103 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:44:18.037027 1771230 cri.go:89] found id: ""
	I1209 05:44:18.037055 1771230 logs.go:282] 0 containers: []
	W1209 05:44:18.037063 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:44:18.037070 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:44:18.037134 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:44:18.068048 1771230 cri.go:89] found id: ""
	I1209 05:44:18.068091 1771230 logs.go:282] 0 containers: []
	W1209 05:44:18.068101 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:44:18.068108 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:44:18.068178 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:44:18.134517 1771230 cri.go:89] found id: ""
	I1209 05:44:18.134543 1771230 logs.go:282] 0 containers: []
	W1209 05:44:18.134553 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:44:18.134559 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:44:18.134649 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:44:18.201163 1771230 cri.go:89] found id: ""
	I1209 05:44:18.201189 1771230 logs.go:282] 0 containers: []
	W1209 05:44:18.201198 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:44:18.201207 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:44:18.201221 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:44:18.248274 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:44:18.248305 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:44:18.361370 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:44:18.361409 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:44:18.400462 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:44:18.400491 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:44:18.521278 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:44:18.521300 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:44:18.521320 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:44:21.064780 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:44:21.076838 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:44:21.076913 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:44:21.112856 1771230 cri.go:89] found id: ""
	I1209 05:44:21.112882 1771230 logs.go:282] 0 containers: []
	W1209 05:44:21.112892 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:44:21.112898 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:44:21.112956 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:44:21.140005 1771230 cri.go:89] found id: ""
	I1209 05:44:21.140034 1771230 logs.go:282] 0 containers: []
	W1209 05:44:21.140043 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:44:21.140050 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:44:21.140110 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:44:21.168183 1771230 cri.go:89] found id: ""
	I1209 05:44:21.168207 1771230 logs.go:282] 0 containers: []
	W1209 05:44:21.168216 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:44:21.168223 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:44:21.168287 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:44:21.193357 1771230 cri.go:89] found id: ""
	I1209 05:44:21.193379 1771230 logs.go:282] 0 containers: []
	W1209 05:44:21.193389 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:44:21.193395 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:44:21.193454 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:44:21.221049 1771230 cri.go:89] found id: ""
	I1209 05:44:21.221071 1771230 logs.go:282] 0 containers: []
	W1209 05:44:21.221079 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:44:21.221086 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:44:21.221145 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:44:21.245714 1771230 cri.go:89] found id: ""
	I1209 05:44:21.245737 1771230 logs.go:282] 0 containers: []
	W1209 05:44:21.245745 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:44:21.245752 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:44:21.245813 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:44:21.273051 1771230 cri.go:89] found id: ""
	I1209 05:44:21.273073 1771230 logs.go:282] 0 containers: []
	W1209 05:44:21.273081 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:44:21.273087 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:44:21.273146 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:44:21.299298 1771230 cri.go:89] found id: ""
	I1209 05:44:21.299320 1771230 logs.go:282] 0 containers: []
	W1209 05:44:21.299329 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:44:21.299337 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:44:21.299351 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:44:21.360304 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:44:21.360376 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:44:21.360399 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:44:21.392584 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:44:21.392621 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:44:21.420494 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:44:21.420520 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:44:21.491090 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:44:21.491127 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:44:24.008331 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:44:24.021043 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:44:24.021216 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:44:24.048864 1771230 cri.go:89] found id: ""
	I1209 05:44:24.048889 1771230 logs.go:282] 0 containers: []
	W1209 05:44:24.048898 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:44:24.048904 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:44:24.048965 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:44:24.088308 1771230 cri.go:89] found id: ""
	I1209 05:44:24.088342 1771230 logs.go:282] 0 containers: []
	W1209 05:44:24.088352 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:44:24.088359 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:44:24.088428 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:44:24.121977 1771230 cri.go:89] found id: ""
	I1209 05:44:24.122011 1771230 logs.go:282] 0 containers: []
	W1209 05:44:24.122021 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:44:24.122027 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:44:24.122097 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:44:24.155842 1771230 cri.go:89] found id: ""
	I1209 05:44:24.155869 1771230 logs.go:282] 0 containers: []
	W1209 05:44:24.155879 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:44:24.155885 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:44:24.155971 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:44:24.192277 1771230 cri.go:89] found id: ""
	I1209 05:44:24.192302 1771230 logs.go:282] 0 containers: []
	W1209 05:44:24.192312 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:44:24.192319 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:44:24.192376 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:44:24.222740 1771230 cri.go:89] found id: ""
	I1209 05:44:24.222774 1771230 logs.go:282] 0 containers: []
	W1209 05:44:24.222783 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:44:24.222789 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:44:24.222859 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:44:24.254219 1771230 cri.go:89] found id: ""
	I1209 05:44:24.254252 1771230 logs.go:282] 0 containers: []
	W1209 05:44:24.254262 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:44:24.254268 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:44:24.254340 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:44:24.281655 1771230 cri.go:89] found id: ""
	I1209 05:44:24.281681 1771230 logs.go:282] 0 containers: []
	W1209 05:44:24.281690 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:44:24.281699 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:44:24.281711 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:44:24.350859 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:44:24.350898 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:44:24.367975 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:44:24.368004 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:44:24.432988 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:44:24.433012 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:44:24.433026 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:44:24.465312 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:44:24.465348 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
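The "container status" step is the one command in each cycle with a built-in fallback chain: "which crictl || echo crictl" resolves crictl's absolute path when it is installed and otherwise leaves the bare name for a PATH lookup, and the trailing "|| sudo docker ps -a" falls back to the Docker CLI only if the crictl invocation fails outright. On this CRI-O node the crictl branch appears to succeed on every pass, which is why no docker output shows up in the log.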
	I1209 05:44:26.993531 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:44:27.004065 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:44:27.004158 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:44:27.032658 1771230 cri.go:89] found id: ""
	I1209 05:44:27.032732 1771230 logs.go:282] 0 containers: []
	W1209 05:44:27.032771 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:44:27.032787 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:44:27.032882 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:44:27.059460 1771230 cri.go:89] found id: ""
	I1209 05:44:27.059496 1771230 logs.go:282] 0 containers: []
	W1209 05:44:27.059505 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:44:27.059528 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:44:27.059616 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:44:27.096676 1771230 cri.go:89] found id: ""
	I1209 05:44:27.096710 1771230 logs.go:282] 0 containers: []
	W1209 05:44:27.096719 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:44:27.096742 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:44:27.096831 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:44:27.126725 1771230 cri.go:89] found id: ""
	I1209 05:44:27.126751 1771230 logs.go:282] 0 containers: []
	W1209 05:44:27.126760 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:44:27.126766 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:44:27.126861 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:44:27.157356 1771230 cri.go:89] found id: ""
	I1209 05:44:27.157394 1771230 logs.go:282] 0 containers: []
	W1209 05:44:27.157404 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:44:27.157411 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:44:27.157518 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:44:27.184001 1771230 cri.go:89] found id: ""
	I1209 05:44:27.184026 1771230 logs.go:282] 0 containers: []
	W1209 05:44:27.184035 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:44:27.184043 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:44:27.184102 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:44:27.210883 1771230 cri.go:89] found id: ""
	I1209 05:44:27.210909 1771230 logs.go:282] 0 containers: []
	W1209 05:44:27.210929 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:44:27.210937 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:44:27.211010 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:44:27.240374 1771230 cri.go:89] found id: ""
	I1209 05:44:27.240400 1771230 logs.go:282] 0 containers: []
	W1209 05:44:27.240409 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:44:27.240418 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:44:27.240450 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:44:27.310172 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:44:27.310210 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:44:27.327596 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:44:27.327628 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:44:27.388052 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:44:27.388071 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:44:27.388084 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:44:27.418940 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:44:27.418974 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:44:29.949444 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:44:29.959707 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:44:29.959778 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:44:29.985749 1771230 cri.go:89] found id: ""
	I1209 05:44:29.985817 1771230 logs.go:282] 0 containers: []
	W1209 05:44:29.985831 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:44:29.985838 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:44:29.985903 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:44:30.027656 1771230 cri.go:89] found id: ""
	I1209 05:44:30.027691 1771230 logs.go:282] 0 containers: []
	W1209 05:44:30.027700 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:44:30.027706 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:44:30.027779 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:44:30.063341 1771230 cri.go:89] found id: ""
	I1209 05:44:30.063377 1771230 logs.go:282] 0 containers: []
	W1209 05:44:30.063387 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:44:30.063394 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:44:30.063476 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:44:30.101401 1771230 cri.go:89] found id: ""
	I1209 05:44:30.101439 1771230 logs.go:282] 0 containers: []
	W1209 05:44:30.101450 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:44:30.101459 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:44:30.101536 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:44:30.143830 1771230 cri.go:89] found id: ""
	I1209 05:44:30.143875 1771230 logs.go:282] 0 containers: []
	W1209 05:44:30.143886 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:44:30.143893 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:44:30.143962 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:44:30.173191 1771230 cri.go:89] found id: ""
	I1209 05:44:30.173233 1771230 logs.go:282] 0 containers: []
	W1209 05:44:30.173243 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:44:30.173249 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:44:30.173323 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:44:30.204311 1771230 cri.go:89] found id: ""
	I1209 05:44:30.204338 1771230 logs.go:282] 0 containers: []
	W1209 05:44:30.204347 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:44:30.204354 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:44:30.204437 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:44:30.231016 1771230 cri.go:89] found id: ""
	I1209 05:44:30.231042 1771230 logs.go:282] 0 containers: []
	W1209 05:44:30.231051 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:44:30.231061 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:44:30.231100 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:44:30.293794 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:44:30.293827 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:44:30.293842 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:44:30.324526 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:44:30.324558 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:44:30.352355 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:44:30.352381 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:44:30.423821 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:44:30.423858 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:44:32.940590 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:44:32.950684 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:44:32.950761 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:44:32.980562 1771230 cri.go:89] found id: ""
	I1209 05:44:32.980584 1771230 logs.go:282] 0 containers: []
	W1209 05:44:32.980596 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:44:32.980602 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:44:32.980661 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:44:33.007215 1771230 cri.go:89] found id: ""
	I1209 05:44:33.007291 1771230 logs.go:282] 0 containers: []
	W1209 05:44:33.007316 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:44:33.007336 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:44:33.007432 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:44:33.042864 1771230 cri.go:89] found id: ""
	I1209 05:44:33.042899 1771230 logs.go:282] 0 containers: []
	W1209 05:44:33.042909 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:44:33.042915 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:44:33.042978 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:44:33.074867 1771230 cri.go:89] found id: ""
	I1209 05:44:33.074889 1771230 logs.go:282] 0 containers: []
	W1209 05:44:33.074898 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:44:33.074904 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:44:33.074963 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:44:33.113306 1771230 cri.go:89] found id: ""
	I1209 05:44:33.113328 1771230 logs.go:282] 0 containers: []
	W1209 05:44:33.113336 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:44:33.113342 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:44:33.113406 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:44:33.147222 1771230 cri.go:89] found id: ""
	I1209 05:44:33.147244 1771230 logs.go:282] 0 containers: []
	W1209 05:44:33.147253 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:44:33.147260 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:44:33.147321 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:44:33.176879 1771230 cri.go:89] found id: ""
	I1209 05:44:33.176902 1771230 logs.go:282] 0 containers: []
	W1209 05:44:33.176911 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:44:33.176918 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:44:33.176976 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:44:33.204311 1771230 cri.go:89] found id: ""
	I1209 05:44:33.204334 1771230 logs.go:282] 0 containers: []
	W1209 05:44:33.204343 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:44:33.204351 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:44:33.204363 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:44:33.265280 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:44:33.265355 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:44:33.265381 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:44:33.295695 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:44:33.295724 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:44:33.326145 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:44:33.326172 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:44:33.392929 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:44:33.392966 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:44:35.910472 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:44:35.921108 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:44:35.921183 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:44:35.951914 1771230 cri.go:89] found id: ""
	I1209 05:44:35.951982 1771230 logs.go:282] 0 containers: []
	W1209 05:44:35.952004 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:44:35.952022 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:44:35.952115 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:44:35.977812 1771230 cri.go:89] found id: ""
	I1209 05:44:35.977835 1771230 logs.go:282] 0 containers: []
	W1209 05:44:35.977844 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:44:35.977850 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:44:35.977911 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:44:36.017825 1771230 cri.go:89] found id: ""
	I1209 05:44:36.017855 1771230 logs.go:282] 0 containers: []
	W1209 05:44:36.017871 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:44:36.017878 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:44:36.017947 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:44:36.046032 1771230 cri.go:89] found id: ""
	I1209 05:44:36.046054 1771230 logs.go:282] 0 containers: []
	W1209 05:44:36.046063 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:44:36.046070 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:44:36.046132 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:44:36.076602 1771230 cri.go:89] found id: ""
	I1209 05:44:36.076632 1771230 logs.go:282] 0 containers: []
	W1209 05:44:36.076642 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:44:36.076648 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:44:36.076712 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:44:36.121294 1771230 cri.go:89] found id: ""
	I1209 05:44:36.121322 1771230 logs.go:282] 0 containers: []
	W1209 05:44:36.121331 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:44:36.121337 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:44:36.121397 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:44:36.157665 1771230 cri.go:89] found id: ""
	I1209 05:44:36.157692 1771230 logs.go:282] 0 containers: []
	W1209 05:44:36.157701 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:44:36.157708 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:44:36.157784 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:44:36.184046 1771230 cri.go:89] found id: ""
	I1209 05:44:36.184068 1771230 logs.go:282] 0 containers: []
	W1209 05:44:36.184076 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:44:36.184084 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:44:36.184096 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:44:36.245914 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:44:36.245937 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:44:36.245950 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:44:36.278355 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:44:36.278390 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:44:36.307607 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:44:36.307640 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:44:36.378514 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:44:36.378551 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:44:38.895538 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:44:38.907751 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:44:38.907869 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:44:38.941192 1771230 cri.go:89] found id: ""
	I1209 05:44:38.941267 1771230 logs.go:282] 0 containers: []
	W1209 05:44:38.941292 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:44:38.941313 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:44:38.941425 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:44:38.980525 1771230 cri.go:89] found id: ""
	I1209 05:44:38.980594 1771230 logs.go:282] 0 containers: []
	W1209 05:44:38.980630 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:44:38.980655 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:44:38.980748 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:44:39.014821 1771230 cri.go:89] found id: ""
	I1209 05:44:39.014900 1771230 logs.go:282] 0 containers: []
	W1209 05:44:39.014926 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:44:39.014946 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:44:39.015058 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:44:39.061571 1771230 cri.go:89] found id: ""
	I1209 05:44:39.061642 1771230 logs.go:282] 0 containers: []
	W1209 05:44:39.061664 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:44:39.061690 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:44:39.061806 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:44:39.113611 1771230 cri.go:89] found id: ""
	I1209 05:44:39.113672 1771230 logs.go:282] 0 containers: []
	W1209 05:44:39.113704 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:44:39.113724 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:44:39.113830 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:44:39.156115 1771230 cri.go:89] found id: ""
	I1209 05:44:39.156188 1771230 logs.go:282] 0 containers: []
	W1209 05:44:39.156211 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:44:39.156230 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:44:39.156347 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:44:39.195162 1771230 cri.go:89] found id: ""
	I1209 05:44:39.195243 1771230 logs.go:282] 0 containers: []
	W1209 05:44:39.195266 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:44:39.195285 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:44:39.195394 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:44:39.229308 1771230 cri.go:89] found id: ""
	I1209 05:44:39.229426 1771230 logs.go:282] 0 containers: []
	W1209 05:44:39.229463 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:44:39.229489 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:44:39.229514 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:44:39.319462 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:44:39.319499 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:44:39.337036 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:44:39.337065 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:44:39.408885 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:44:39.408908 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:44:39.408921 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:44:39.440940 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:44:39.440978 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:44:41.970897 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:44:41.981479 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:44:41.981559 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:44:42.013947 1771230 cri.go:89] found id: ""
	I1209 05:44:42.013974 1771230 logs.go:282] 0 containers: []
	W1209 05:44:42.013983 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:44:42.013991 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:44:42.014080 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:44:42.046379 1771230 cri.go:89] found id: ""
	I1209 05:44:42.046403 1771230 logs.go:282] 0 containers: []
	W1209 05:44:42.046412 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:44:42.046426 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:44:42.046494 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:44:42.075246 1771230 cri.go:89] found id: ""
	I1209 05:44:42.075277 1771230 logs.go:282] 0 containers: []
	W1209 05:44:42.075288 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:44:42.075305 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:44:42.075396 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:44:42.110519 1771230 cri.go:89] found id: ""
	I1209 05:44:42.110638 1771230 logs.go:282] 0 containers: []
	W1209 05:44:42.110673 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:44:42.110715 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:44:42.110844 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:44:42.143280 1771230 cri.go:89] found id: ""
	I1209 05:44:42.143357 1771230 logs.go:282] 0 containers: []
	W1209 05:44:42.143386 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:44:42.143408 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:44:42.143514 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:44:42.184439 1771230 cri.go:89] found id: ""
	I1209 05:44:42.184540 1771230 logs.go:282] 0 containers: []
	W1209 05:44:42.184556 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:44:42.184565 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:44:42.184653 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:44:42.219490 1771230 cri.go:89] found id: ""
	I1209 05:44:42.219515 1771230 logs.go:282] 0 containers: []
	W1209 05:44:42.219525 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:44:42.219533 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:44:42.219620 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:44:42.257517 1771230 cri.go:89] found id: ""
	I1209 05:44:42.257606 1771230 logs.go:282] 0 containers: []
	W1209 05:44:42.257632 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:44:42.257655 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:44:42.257694 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:44:42.281465 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:44:42.281499 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:44:42.372695 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:44:42.372775 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:44:42.372852 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:44:42.411280 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:44:42.411316 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:44:42.445120 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:44:42.445150 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:44:45.017162 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:44:45.056021 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:44:45.056111 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:44:45.105702 1771230 cri.go:89] found id: ""
	I1209 05:44:45.105727 1771230 logs.go:282] 0 containers: []
	W1209 05:44:45.105737 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:44:45.105744 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:44:45.105854 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:44:45.170186 1771230 cri.go:89] found id: ""
	I1209 05:44:45.170210 1771230 logs.go:282] 0 containers: []
	W1209 05:44:45.170219 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:44:45.170225 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:44:45.170300 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:44:45.203707 1771230 cri.go:89] found id: ""
	I1209 05:44:45.203806 1771230 logs.go:282] 0 containers: []
	W1209 05:44:45.203834 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:44:45.203870 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:44:45.203970 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:44:45.267476 1771230 cri.go:89] found id: ""
	I1209 05:44:45.267510 1771230 logs.go:282] 0 containers: []
	W1209 05:44:45.267520 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:44:45.267528 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:44:45.267598 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:44:45.307125 1771230 cri.go:89] found id: ""
	I1209 05:44:45.307149 1771230 logs.go:282] 0 containers: []
	W1209 05:44:45.307158 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:44:45.307164 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:44:45.307228 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:44:45.351975 1771230 cri.go:89] found id: ""
	I1209 05:44:45.352055 1771230 logs.go:282] 0 containers: []
	W1209 05:44:45.352079 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:44:45.352099 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:44:45.352197 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:44:45.383873 1771230 cri.go:89] found id: ""
	I1209 05:44:45.383941 1771230 logs.go:282] 0 containers: []
	W1209 05:44:45.383975 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:44:45.383996 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:44:45.384091 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:44:45.420666 1771230 cri.go:89] found id: ""
	I1209 05:44:45.420694 1771230 logs.go:282] 0 containers: []
	W1209 05:44:45.420704 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:44:45.420714 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:44:45.420726 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:44:45.489270 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:44:45.489307 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:44:45.506242 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:44:45.506276 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:44:45.583168 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:44:45.583198 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:44:45.583216 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:44:45.618752 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:44:45.618797 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:44:48.155391 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:44:48.166043 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:44:48.166121 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:44:48.193021 1771230 cri.go:89] found id: ""
	I1209 05:44:48.193044 1771230 logs.go:282] 0 containers: []
	W1209 05:44:48.193052 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:44:48.193059 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:44:48.193122 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:44:48.220490 1771230 cri.go:89] found id: ""
	I1209 05:44:48.220513 1771230 logs.go:282] 0 containers: []
	W1209 05:44:48.220521 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:44:48.220527 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:44:48.220590 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:44:48.246491 1771230 cri.go:89] found id: ""
	I1209 05:44:48.246514 1771230 logs.go:282] 0 containers: []
	W1209 05:44:48.246522 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:44:48.246529 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:44:48.246628 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:44:48.274139 1771230 cri.go:89] found id: ""
	I1209 05:44:48.274165 1771230 logs.go:282] 0 containers: []
	W1209 05:44:48.274175 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:44:48.274182 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:44:48.274245 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:44:48.301448 1771230 cri.go:89] found id: ""
	I1209 05:44:48.301478 1771230 logs.go:282] 0 containers: []
	W1209 05:44:48.301492 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:44:48.301498 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:44:48.301570 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:44:48.330804 1771230 cri.go:89] found id: ""
	I1209 05:44:48.330829 1771230 logs.go:282] 0 containers: []
	W1209 05:44:48.330839 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:44:48.330853 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:44:48.330913 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:44:48.364009 1771230 cri.go:89] found id: ""
	I1209 05:44:48.364036 1771230 logs.go:282] 0 containers: []
	W1209 05:44:48.364045 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:44:48.364051 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:44:48.364120 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:44:48.397269 1771230 cri.go:89] found id: ""
	I1209 05:44:48.397297 1771230 logs.go:282] 0 containers: []
	W1209 05:44:48.397307 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:44:48.397317 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:44:48.397329 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:44:48.436492 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:44:48.436520 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:44:48.507004 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:44:48.507045 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:44:48.524423 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:44:48.524457 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:44:48.589190 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:44:48.589258 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:44:48.589285 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
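Note that the "Gathering logs for ..." sections come out in a different order on each pass: kubelet first in some cycles, container status or describe nodes first in others (as in the cycle just above). That is consistent with the gather targets being stored in a Go map, whose iteration order is deliberately randomized. The snippet below illustrates the effect with hypothetical map keys; it is not minikube's actual data structure.

    // gather_order.go - shows why map-driven gathering reorders per pass.
    package main

    import "fmt"

    func main() {
        targets := map[string]string{
            "kubelet":          "journalctl -u kubelet -n 400",
            "dmesg":            "dmesg --level warn,err,crit,alert,emerg",
            "describe nodes":   "kubectl describe nodes",
            "CRI-O":            "journalctl -u crio -n 400",
            "container status": "crictl ps -a",
        }
        // Go starts each map range at a pseudo-random bucket, so this loop
        // prints the targets in a different sequence each time it runs,
        // even within a single process.
        for name := range targets {
            fmt.Println("Gathering logs for", name, "...")
        }
    }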
	I1209 05:44:51.122970 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:44:51.133746 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:44:51.133819 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:44:51.161501 1771230 cri.go:89] found id: ""
	I1209 05:44:51.161526 1771230 logs.go:282] 0 containers: []
	W1209 05:44:51.161550 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:44:51.161558 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:44:51.161619 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:44:51.188630 1771230 cri.go:89] found id: ""
	I1209 05:44:51.188656 1771230 logs.go:282] 0 containers: []
	W1209 05:44:51.188666 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:44:51.188672 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:44:51.188733 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:44:51.213738 1771230 cri.go:89] found id: ""
	I1209 05:44:51.213764 1771230 logs.go:282] 0 containers: []
	W1209 05:44:51.213773 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:44:51.213781 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:44:51.213841 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:44:51.241277 1771230 cri.go:89] found id: ""
	I1209 05:44:51.241304 1771230 logs.go:282] 0 containers: []
	W1209 05:44:51.241313 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:44:51.241320 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:44:51.241379 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:44:51.267792 1771230 cri.go:89] found id: ""
	I1209 05:44:51.267816 1771230 logs.go:282] 0 containers: []
	W1209 05:44:51.267825 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:44:51.267832 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:44:51.267896 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:44:51.295188 1771230 cri.go:89] found id: ""
	I1209 05:44:51.295213 1771230 logs.go:282] 0 containers: []
	W1209 05:44:51.295223 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:44:51.295231 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:44:51.295302 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:44:51.333641 1771230 cri.go:89] found id: ""
	I1209 05:44:51.333667 1771230 logs.go:282] 0 containers: []
	W1209 05:44:51.333676 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:44:51.333682 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:44:51.333744 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:44:51.364006 1771230 cri.go:89] found id: ""
	I1209 05:44:51.364031 1771230 logs.go:282] 0 containers: []
	W1209 05:44:51.364041 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:44:51.364050 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:44:51.364060 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:44:51.445428 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:44:51.445469 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:44:51.462154 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:44:51.462181 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:44:51.530698 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:44:51.530719 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:44:51.530733 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:44:51.561682 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:44:51.561717 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:44:54.092824 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:44:54.103699 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:44:54.103776 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:44:54.129496 1771230 cri.go:89] found id: ""
	I1209 05:44:54.129525 1771230 logs.go:282] 0 containers: []
	W1209 05:44:54.129534 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:44:54.129552 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:44:54.129613 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:44:54.157468 1771230 cri.go:89] found id: ""
	I1209 05:44:54.157496 1771230 logs.go:282] 0 containers: []
	W1209 05:44:54.157505 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:44:54.157516 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:44:54.157594 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:44:54.185389 1771230 cri.go:89] found id: ""
	I1209 05:44:54.185413 1771230 logs.go:282] 0 containers: []
	W1209 05:44:54.185421 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:44:54.185428 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:44:54.185486 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:44:54.211354 1771230 cri.go:89] found id: ""
	I1209 05:44:54.211375 1771230 logs.go:282] 0 containers: []
	W1209 05:44:54.211383 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:44:54.211389 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:44:54.211447 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:44:54.236993 1771230 cri.go:89] found id: ""
	I1209 05:44:54.237015 1771230 logs.go:282] 0 containers: []
	W1209 05:44:54.237024 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:44:54.237030 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:44:54.237089 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:44:54.265849 1771230 cri.go:89] found id: ""
	I1209 05:44:54.265872 1771230 logs.go:282] 0 containers: []
	W1209 05:44:54.265880 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:44:54.265887 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:44:54.265949 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:44:54.294769 1771230 cri.go:89] found id: ""
	I1209 05:44:54.294836 1771230 logs.go:282] 0 containers: []
	W1209 05:44:54.294860 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:44:54.294879 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:44:54.294975 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:44:54.321411 1771230 cri.go:89] found id: ""
	I1209 05:44:54.321481 1771230 logs.go:282] 0 containers: []
	W1209 05:44:54.321504 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:44:54.321526 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:44:54.321576 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:44:54.406330 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:44:54.406393 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:44:54.406421 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:44:54.437686 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:44:54.437732 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:44:54.471268 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:44:54.471297 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:44:54.539991 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:44:54.540029 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:44:57.057699 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:44:57.068260 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:44:57.068334 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:44:57.094485 1771230 cri.go:89] found id: ""
	I1209 05:44:57.094507 1771230 logs.go:282] 0 containers: []
	W1209 05:44:57.094516 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:44:57.094523 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:44:57.094596 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:44:57.121125 1771230 cri.go:89] found id: ""
	I1209 05:44:57.121151 1771230 logs.go:282] 0 containers: []
	W1209 05:44:57.121160 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:44:57.121166 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:44:57.121225 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:44:57.150998 1771230 cri.go:89] found id: ""
	I1209 05:44:57.151029 1771230 logs.go:282] 0 containers: []
	W1209 05:44:57.151038 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:44:57.151045 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:44:57.151108 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:44:57.177691 1771230 cri.go:89] found id: ""
	I1209 05:44:57.177723 1771230 logs.go:282] 0 containers: []
	W1209 05:44:57.177732 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:44:57.177740 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:44:57.177802 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:44:57.203810 1771230 cri.go:89] found id: ""
	I1209 05:44:57.203833 1771230 logs.go:282] 0 containers: []
	W1209 05:44:57.203851 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:44:57.203859 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:44:57.203924 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:44:57.233937 1771230 cri.go:89] found id: ""
	I1209 05:44:57.233962 1771230 logs.go:282] 0 containers: []
	W1209 05:44:57.233971 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:44:57.233977 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:44:57.234041 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:44:57.261497 1771230 cri.go:89] found id: ""
	I1209 05:44:57.261530 1771230 logs.go:282] 0 containers: []
	W1209 05:44:57.261546 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:44:57.261553 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:44:57.261630 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:44:57.288450 1771230 cri.go:89] found id: ""
	I1209 05:44:57.288475 1771230 logs.go:282] 0 containers: []
	W1209 05:44:57.288485 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:44:57.288494 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:44:57.288524 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:44:57.358675 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:44:57.358747 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:44:57.358775 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:44:57.395256 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:44:57.395290 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:44:57.426059 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:44:57.426087 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:44:57.498388 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:44:57.498431 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
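Each block like the one above is a single iteration of minikube's apiserver wait loop: it runs pgrep -xnf kube-apiserver.*minikube.* (-f matches against the full command line, -x requires an exact match, -n picks the newest matching PID), and when no process is found it enumerates every expected control-plane container and re-gathers the kubelet, dmesg, CRI-O, and container-status logs. The per-component probe is the same crictl query each time, for example:

    sudo crictl ps -a --quiet --name=kube-scheduler    # likewise for kube-apiserver, etcd, coredns, kube-proxy, ...

Every probe returns found id: "" here, so the cycle repeats below, roughly every three seconds, until the restart budget is exhausted.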
	I1209 05:45:00.015076 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:45:00.094519 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:45:00.094632 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:45:00.242018 1771230 cri.go:89] found id: ""
	I1209 05:45:00.242119 1771230 logs.go:282] 0 containers: []
	W1209 05:45:00.242145 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:45:00.242182 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:45:00.242361 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:45:00.364808 1771230 cri.go:89] found id: ""
	I1209 05:45:00.364904 1771230 logs.go:282] 0 containers: []
	W1209 05:45:00.364929 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:45:00.364974 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:45:00.365101 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:45:00.592304 1771230 cri.go:89] found id: ""
	I1209 05:45:00.592395 1771230 logs.go:282] 0 containers: []
	W1209 05:45:00.592420 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:45:00.592441 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:45:00.592576 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:45:00.675707 1771230 cri.go:89] found id: ""
	I1209 05:45:00.675826 1771230 logs.go:282] 0 containers: []
	W1209 05:45:00.675851 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:45:00.675876 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:45:00.676004 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:45:00.750401 1771230 cri.go:89] found id: ""
	I1209 05:45:00.750491 1771230 logs.go:282] 0 containers: []
	W1209 05:45:00.750517 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:45:00.750537 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:45:00.750674 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:45:00.782187 1771230 cri.go:89] found id: ""
	I1209 05:45:00.782214 1771230 logs.go:282] 0 containers: []
	W1209 05:45:00.782222 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:45:00.782232 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:45:00.782300 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:45:00.825014 1771230 cri.go:89] found id: ""
	I1209 05:45:00.825036 1771230 logs.go:282] 0 containers: []
	W1209 05:45:00.825045 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:45:00.825052 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:45:00.825116 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:45:00.865301 1771230 cri.go:89] found id: ""
	I1209 05:45:00.865324 1771230 logs.go:282] 0 containers: []
	W1209 05:45:00.865332 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:45:00.865342 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:45:00.865356 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:45:00.945978 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:45:00.946056 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:45:00.965855 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:45:00.965885 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:45:01.059072 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:45:01.059145 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:45:01.059173 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:45:01.095197 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:45:01.095247 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:45:03.630394 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:45:03.640902 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:45:03.640978 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:45:03.668111 1771230 cri.go:89] found id: ""
	I1209 05:45:03.668136 1771230 logs.go:282] 0 containers: []
	W1209 05:45:03.668146 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:45:03.668152 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:45:03.668214 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:45:03.694540 1771230 cri.go:89] found id: ""
	I1209 05:45:03.694566 1771230 logs.go:282] 0 containers: []
	W1209 05:45:03.694619 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:45:03.694626 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:45:03.694688 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:45:03.720724 1771230 cri.go:89] found id: ""
	I1209 05:45:03.720749 1771230 logs.go:282] 0 containers: []
	W1209 05:45:03.720757 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:45:03.720764 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:45:03.720856 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:45:03.748031 1771230 cri.go:89] found id: ""
	I1209 05:45:03.748107 1771230 logs.go:282] 0 containers: []
	W1209 05:45:03.748124 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:45:03.748131 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:45:03.748193 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:45:03.774316 1771230 cri.go:89] found id: ""
	I1209 05:45:03.774343 1771230 logs.go:282] 0 containers: []
	W1209 05:45:03.774352 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:45:03.774359 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:45:03.774422 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:45:03.801132 1771230 cri.go:89] found id: ""
	I1209 05:45:03.801163 1771230 logs.go:282] 0 containers: []
	W1209 05:45:03.801172 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:45:03.801179 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:45:03.801236 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:45:03.827592 1771230 cri.go:89] found id: ""
	I1209 05:45:03.827616 1771230 logs.go:282] 0 containers: []
	W1209 05:45:03.827625 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:45:03.827631 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:45:03.827691 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:45:03.857849 1771230 cri.go:89] found id: ""
	I1209 05:45:03.857924 1771230 logs.go:282] 0 containers: []
	W1209 05:45:03.857951 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:45:03.857974 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:45:03.858015 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:45:03.932345 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:45:03.932382 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:45:03.949533 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:45:03.949580 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:45:04.033979 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:45:04.034049 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:45:04.034073 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:45:04.067798 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:45:04.067844 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:45:06.600126 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:45:06.611105 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:45:06.611184 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:45:06.641477 1771230 cri.go:89] found id: ""
	I1209 05:45:06.641501 1771230 logs.go:282] 0 containers: []
	W1209 05:45:06.641510 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:45:06.641517 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:45:06.641607 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:45:06.667693 1771230 cri.go:89] found id: ""
	I1209 05:45:06.667770 1771230 logs.go:282] 0 containers: []
	W1209 05:45:06.667787 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:45:06.667794 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:45:06.667859 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:45:06.702143 1771230 cri.go:89] found id: ""
	I1209 05:45:06.702167 1771230 logs.go:282] 0 containers: []
	W1209 05:45:06.702176 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:45:06.702182 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:45:06.702241 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:45:06.729691 1771230 cri.go:89] found id: ""
	I1209 05:45:06.729719 1771230 logs.go:282] 0 containers: []
	W1209 05:45:06.729727 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:45:06.729734 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:45:06.729798 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:45:06.760358 1771230 cri.go:89] found id: ""
	I1209 05:45:06.760381 1771230 logs.go:282] 0 containers: []
	W1209 05:45:06.760390 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:45:06.760396 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:45:06.760456 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:45:06.790958 1771230 cri.go:89] found id: ""
	I1209 05:45:06.791035 1771230 logs.go:282] 0 containers: []
	W1209 05:45:06.791051 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:45:06.791058 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:45:06.791119 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:45:06.816140 1771230 cri.go:89] found id: ""
	I1209 05:45:06.816204 1771230 logs.go:282] 0 containers: []
	W1209 05:45:06.816218 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:45:06.816225 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:45:06.816284 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:45:06.842831 1771230 cri.go:89] found id: ""
	I1209 05:45:06.842906 1771230 logs.go:282] 0 containers: []
	W1209 05:45:06.842929 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:45:06.842950 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:45:06.842988 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:45:06.873038 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:45:06.873068 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:45:06.941826 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:45:06.941864 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:45:06.959031 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:45:06.959060 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:45:07.031308 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:45:07.031326 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:45:07.031340 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:45:09.563634 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:45:09.574828 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:45:09.574898 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:45:09.612974 1771230 cri.go:89] found id: ""
	I1209 05:45:09.613003 1771230 logs.go:282] 0 containers: []
	W1209 05:45:09.613014 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:45:09.613020 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:45:09.613088 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:45:09.650001 1771230 cri.go:89] found id: ""
	I1209 05:45:09.650027 1771230 logs.go:282] 0 containers: []
	W1209 05:45:09.650036 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:45:09.650045 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:45:09.650104 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:45:09.676326 1771230 cri.go:89] found id: ""
	I1209 05:45:09.676350 1771230 logs.go:282] 0 containers: []
	W1209 05:45:09.676365 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:45:09.676371 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:45:09.676436 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:45:09.706971 1771230 cri.go:89] found id: ""
	I1209 05:45:09.706996 1771230 logs.go:282] 0 containers: []
	W1209 05:45:09.707006 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:45:09.707013 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:45:09.707073 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:45:09.738072 1771230 cri.go:89] found id: ""
	I1209 05:45:09.738096 1771230 logs.go:282] 0 containers: []
	W1209 05:45:09.738105 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:45:09.738112 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:45:09.738180 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:45:09.764200 1771230 cri.go:89] found id: ""
	I1209 05:45:09.764224 1771230 logs.go:282] 0 containers: []
	W1209 05:45:09.764233 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:45:09.764240 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:45:09.764304 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:45:09.791088 1771230 cri.go:89] found id: ""
	I1209 05:45:09.791114 1771230 logs.go:282] 0 containers: []
	W1209 05:45:09.791123 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:45:09.791129 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:45:09.791189 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:45:09.817398 1771230 cri.go:89] found id: ""
	I1209 05:45:09.817422 1771230 logs.go:282] 0 containers: []
	W1209 05:45:09.817431 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:45:09.817440 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:45:09.817451 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:45:09.849327 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:45:09.849364 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:45:09.886184 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:45:09.886216 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:45:09.954329 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:45:09.954366 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:45:09.970924 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:45:09.970953 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:45:10.043677 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:45:12.544569 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:45:12.555490 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:45:12.555571 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:45:12.588247 1771230 cri.go:89] found id: ""
	I1209 05:45:12.588271 1771230 logs.go:282] 0 containers: []
	W1209 05:45:12.588281 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:45:12.588288 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:45:12.588354 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:45:12.626885 1771230 cri.go:89] found id: ""
	I1209 05:45:12.626913 1771230 logs.go:282] 0 containers: []
	W1209 05:45:12.626922 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:45:12.626929 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:45:12.626992 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:45:12.656158 1771230 cri.go:89] found id: ""
	I1209 05:45:12.656186 1771230 logs.go:282] 0 containers: []
	W1209 05:45:12.656197 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:45:12.656206 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:45:12.656274 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:45:12.683073 1771230 cri.go:89] found id: ""
	I1209 05:45:12.683099 1771230 logs.go:282] 0 containers: []
	W1209 05:45:12.683108 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:45:12.683115 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:45:12.683176 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:45:12.710471 1771230 cri.go:89] found id: ""
	I1209 05:45:12.710500 1771230 logs.go:282] 0 containers: []
	W1209 05:45:12.710510 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:45:12.710517 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:45:12.710592 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:45:12.743880 1771230 cri.go:89] found id: ""
	I1209 05:45:12.743908 1771230 logs.go:282] 0 containers: []
	W1209 05:45:12.743917 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:45:12.743924 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:45:12.743984 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:45:12.770393 1771230 cri.go:89] found id: ""
	I1209 05:45:12.770419 1771230 logs.go:282] 0 containers: []
	W1209 05:45:12.770427 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:45:12.770434 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:45:12.770494 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:45:12.797773 1771230 cri.go:89] found id: ""
	I1209 05:45:12.797801 1771230 logs.go:282] 0 containers: []
	W1209 05:45:12.797811 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:45:12.797823 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:45:12.797835 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:45:12.866664 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:45:12.866704 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:45:12.884062 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:45:12.884090 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:45:12.946030 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:45:12.946053 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:45:12.946068 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:45:12.977693 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:45:12.977727 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:45:15.508113 1771230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:45:15.518607 1771230 kubeadm.go:602] duration metric: took 4m3.113068748s to restartPrimaryControlPlane
	W1209 05:45:15.518680 1771230 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1209 05:45:15.518750 1771230 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
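At this point minikube has polled for 4m3s without ever seeing a control-plane process, so it abandons the in-place restart and rebuilds the control plane: the kubeadm reset --force above tears down the previous node state non-interactively (static pod manifests, locally hosted etcd data, and the kubeconfig files kubeadm generated, per kubeadm's documented cleanup), which is why the ls check that follows finds none of the /etc/kubernetes/*.conf files.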
	I1209 05:45:16.016204 1771230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:45:16.030172 1771230 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 05:45:16.039017 1771230 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 05:45:16.039092 1771230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 05:45:16.047663 1771230 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 05:45:16.047725 1771230 kubeadm.go:158] found existing configuration files:
	
	I1209 05:45:16.047786 1771230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 05:45:16.056361 1771230 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 05:45:16.056458 1771230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 05:45:16.064561 1771230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 05:45:16.073029 1771230 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 05:45:16.073092 1771230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 05:45:16.080821 1771230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 05:45:16.089134 1771230 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 05:45:16.089205 1771230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 05:45:16.097306 1771230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 05:45:16.105745 1771230 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 05:45:16.105812 1771230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
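The grep-then-rm sequence above is minikube's stale-kubeconfig sweep: any /etc/kubernetes config file that does not reference https://control-plane.minikube.internal:8443 is removed before re-running init. Because kubeadm reset already deleted all four files, every grep exits with status 2 and the rm calls are no-ops. A condensed sketch of the same logic (hypothetical shell, not minikube's actual implementation):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done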
	I1209 05:45:16.113784 1771230 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 05:45:16.158801 1771230 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1209 05:45:16.159001 1771230 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 05:45:16.232547 1771230 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 05:45:16.232645 1771230 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1209 05:45:16.232694 1771230 kubeadm.go:319] OS: Linux
	I1209 05:45:16.232744 1771230 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 05:45:16.232801 1771230 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1209 05:45:16.232857 1771230 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 05:45:16.232910 1771230 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 05:45:16.232962 1771230 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 05:45:16.233013 1771230 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 05:45:16.233061 1771230 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 05:45:16.233113 1771230 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 05:45:16.233163 1771230 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1209 05:45:16.305250 1771230 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 05:45:16.305381 1771230 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 05:45:16.305484 1771230 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 05:45:16.320223 1771230 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 05:45:16.326396 1771230 out.go:252]   - Generating certificates and keys ...
	I1209 05:45:16.326505 1771230 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 05:45:16.326630 1771230 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 05:45:16.326724 1771230 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 05:45:16.326797 1771230 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1209 05:45:16.326871 1771230 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 05:45:16.326926 1771230 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1209 05:45:16.326988 1771230 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1209 05:45:16.327049 1771230 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1209 05:45:16.327123 1771230 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 05:45:16.327195 1771230 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 05:45:16.327233 1771230 kubeadm.go:319] [certs] Using the existing "sa" key
	I1209 05:45:16.327289 1771230 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 05:45:16.586837 1771230 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 05:45:16.884199 1771230 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 05:45:17.131897 1771230 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 05:45:17.205691 1771230 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 05:45:17.409020 1771230 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 05:45:17.410276 1771230 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 05:45:17.422814 1771230 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 05:45:17.426162 1771230 out.go:252]   - Booting up control plane ...
	I1209 05:45:17.426273 1771230 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 05:45:17.426364 1771230 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 05:45:17.430971 1771230 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 05:45:17.448706 1771230 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 05:45:17.448824 1771230 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 05:45:17.462704 1771230 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 05:45:17.462805 1771230 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 05:45:17.462845 1771230 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 05:45:17.647027 1771230 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 05:45:17.647148 1771230 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 05:49:17.648210 1771230 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001282121s
	I1209 05:49:17.648250 1771230 kubeadm.go:319] 
	I1209 05:49:17.648309 1771230 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1209 05:49:17.648347 1771230 kubeadm.go:319] 	- The kubelet is not running
	I1209 05:49:17.648455 1771230 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 05:49:17.648465 1771230 kubeadm.go:319] 
	I1209 05:49:17.648570 1771230 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 05:49:17.648605 1771230 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1209 05:49:17.648639 1771230 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1209 05:49:17.648648 1771230 kubeadm.go:319] 
	I1209 05:49:17.652355 1771230 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 05:49:17.652796 1771230 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1209 05:49:17.652911 1771230 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 05:49:17.653153 1771230 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1209 05:49:17.653163 1771230 kubeadm.go:319] 
	I1209 05:49:17.653233 1771230 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1209 05:49:17.653337 1771230 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001282121s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
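The second SystemVerification warning in the stderr above is the most likely root cause of the 4m0s kubelet health timeout: this node runs cgroups v1 (kernel 5.15.0-1084-aws), and kubelet v1.35 or newer refuses cgroup v1 hosts unless support is opted into explicitly. The opt-in named by the warning would look roughly like the following KubeletConfiguration fragment (a sketch; the camelCase field spelling is an assumption based on the v1beta1 config convention, and this run did not set it):

    # /var/lib/kubelet/config.yaml fragment -- assumption, not taken from this run
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false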
	
	I1209 05:49:17.653423 1771230 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1209 05:49:18.095266 1771230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:49:18.110093 1771230 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 05:49:18.110162 1771230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 05:49:18.121779 1771230 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 05:49:18.121855 1771230 kubeadm.go:158] found existing configuration files:
	
	I1209 05:49:18.121935 1771230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 05:49:18.131907 1771230 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 05:49:18.132061 1771230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 05:49:18.140910 1771230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 05:49:18.150684 1771230 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 05:49:18.150756 1771230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 05:49:18.160144 1771230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 05:49:18.170670 1771230 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 05:49:18.170732 1771230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 05:49:18.179927 1771230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 05:49:18.190022 1771230 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 05:49:18.190139 1771230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 05:49:18.199283 1771230 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 05:49:18.280223 1771230 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1209 05:49:18.282715 1771230 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 05:49:18.416764 1771230 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 05:49:18.416913 1771230 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1209 05:49:18.416994 1771230 kubeadm.go:319] OS: Linux
	I1209 05:49:18.417078 1771230 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 05:49:18.417163 1771230 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1209 05:49:18.417248 1771230 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 05:49:18.417331 1771230 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 05:49:18.417409 1771230 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 05:49:18.417488 1771230 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 05:49:18.417579 1771230 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 05:49:18.417658 1771230 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 05:49:18.417734 1771230 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1209 05:49:18.503349 1771230 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 05:49:18.503554 1771230 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 05:49:18.503697 1771230 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 05:49:18.524434 1771230 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 05:49:18.530064 1771230 out.go:252]   - Generating certificates and keys ...
	I1209 05:49:18.530229 1771230 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 05:49:18.530356 1771230 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 05:49:18.530465 1771230 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 05:49:18.530547 1771230 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1209 05:49:18.530661 1771230 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 05:49:18.530739 1771230 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1209 05:49:18.530825 1771230 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1209 05:49:18.530920 1771230 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1209 05:49:18.531024 1771230 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 05:49:18.531150 1771230 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 05:49:18.531555 1771230 kubeadm.go:319] [certs] Using the existing "sa" key
	I1209 05:49:18.531855 1771230 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 05:49:18.884690 1771230 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 05:49:18.976514 1771230 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 05:49:19.698010 1771230 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 05:49:20.030584 1771230 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 05:49:20.539910 1771230 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 05:49:20.542725 1771230 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 05:49:20.545873 1771230 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 05:49:20.549537 1771230 out.go:252]   - Booting up control plane ...
	I1209 05:49:20.549698 1771230 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 05:49:20.549831 1771230 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 05:49:20.549928 1771230 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 05:49:20.564258 1771230 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 05:49:20.564746 1771230 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 05:49:20.574152 1771230 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 05:49:20.574951 1771230 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 05:49:20.575174 1771230 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 05:49:20.712152 1771230 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 05:49:20.712299 1771230 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 05:53:20.713448 1771230 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001358954s
	I1209 05:53:20.713486 1771230 kubeadm.go:319] 
	I1209 05:53:20.713551 1771230 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1209 05:53:20.713586 1771230 kubeadm.go:319] 	- The kubelet is not running
	I1209 05:53:20.713721 1771230 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 05:53:20.713742 1771230 kubeadm.go:319] 
	I1209 05:53:20.713842 1771230 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 05:53:20.713873 1771230 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1209 05:53:20.713903 1771230 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1209 05:53:20.713908 1771230 kubeadm.go:319] 
	I1209 05:53:20.718108 1771230 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 05:53:20.718533 1771230 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1209 05:53:20.718659 1771230 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 05:53:20.718904 1771230 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1209 05:53:20.718911 1771230 kubeadm.go:319] 
	I1209 05:53:20.718980 1771230 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1209 05:53:20.719035 1771230 kubeadm.go:403] duration metric: took 12m8.360561884s to StartCluster
	I1209 05:53:20.719083 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:20.719142 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:20.759611 1771230 cri.go:89] found id: ""
	I1209 05:53:20.759633 1771230 logs.go:282] 0 containers: []
	W1209 05:53:20.759641 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:20.759647 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:53:20.759709 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:20.790036 1771230 cri.go:89] found id: ""
	I1209 05:53:20.790058 1771230 logs.go:282] 0 containers: []
	W1209 05:53:20.790067 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:53:20.790073 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:53:20.790133 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:20.841780 1771230 cri.go:89] found id: ""
	I1209 05:53:20.841803 1771230 logs.go:282] 0 containers: []
	W1209 05:53:20.841812 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:53:20.841817 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:20.841876 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:20.909264 1771230 cri.go:89] found id: ""
	I1209 05:53:20.909286 1771230 logs.go:282] 0 containers: []
	W1209 05:53:20.909295 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:20.909301 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:20.909362 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:20.945721 1771230 cri.go:89] found id: ""
	I1209 05:53:20.945744 1771230 logs.go:282] 0 containers: []
	W1209 05:53:20.945752 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:20.945758 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:20.945818 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:20.991021 1771230 cri.go:89] found id: ""
	I1209 05:53:20.991043 1771230 logs.go:282] 0 containers: []
	W1209 05:53:20.991051 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:20.991059 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:20.991117 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:21.037379 1771230 cri.go:89] found id: ""
	I1209 05:53:21.037399 1771230 logs.go:282] 0 containers: []
	W1209 05:53:21.037407 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:21.037413 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:53:21.037473 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:53:21.068751 1771230 cri.go:89] found id: ""
	I1209 05:53:21.068772 1771230 logs.go:282] 0 containers: []
	W1209 05:53:21.068781 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:53:21.068790 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:21.068804 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:21.089143 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:21.089212 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:21.183462 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:53:21.183480 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:53:21.183492 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:53:21.220772 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:53:21.220846 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:21.257659 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:21.257726 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 05:53:21.331748 1771230 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001358954s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1209 05:53:21.331853 1771230 out.go:285] * 
	W1209 05:53:21.332055 1771230 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001358954s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 05:53:21.332108 1771230 out.go:285] * 
	W1209 05:53:21.334360 1771230 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 05:53:21.340405 1771230 out.go:203] 
	W1209 05:53:21.344443 1771230 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001358954s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 05:53:21.344726 1771230 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1209 05:53:21.344801 1771230 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1209 05:53:21.349870 1771230 out.go:203] 

                                                
                                                
** /stderr **
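The root cause in the log above is the kubelet never reporting healthy on http://127.0.0.1:10248/healthz, so kubeadm aborts in the wait-control-plane phase. A minimal triage sketch for this profile, assuming the kubernetes-upgrade-054206 node is still running; the commands simply package the advice kubeadm and minikube print in the log:

	# Check kubelet state and recent failures inside the node
	out/minikube-linux-arm64 -p kubernetes-upgrade-054206 ssh -- sudo systemctl status kubelet
	out/minikube-linux-arm64 -p kubernetes-upgrade-054206 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 100

	# Retry the start with the cgroup driver the suggestion above points at
	out/minikube-linux-arm64 start -p kubernetes-upgrade-054206 --extra-config=kubelet.cgroup-driver=systemd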
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-linux-arm64 start -p kubernetes-upgrade-054206 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 109
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-054206 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-054206 version --output=json: exit status 1 (83.084582ms)

                                                
                                                
-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "33",
	    "gitVersion": "v1.33.2",
	    "gitCommit": "a57b6f7709f6c2722b92f07b8b4c48210a51fc40",
	    "gitTreeState": "clean",
	    "buildDate": "2025-06-17T18:41:31Z",
	    "goVersion": "go1.24.4",
	    "compiler": "gc",
	    "platform": "linux/arm64"
	  },
	  "kustomizeVersion": "v5.6.0"
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.85.2:8443 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
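kubectl still prints the clientVersion block because that half is computed locally; only the server query fails. A quick way to separate the two, assuming the host can route to 192.168.85.2 (the probe below is expected to report connection refused while the apiserver is down):

	# Client-only version never touches the cluster
	kubectl --context kubernetes-upgrade-054206 version --client --output=json

	# Probe the refused endpoint directly
	curl -k --max-time 5 https://192.168.85.2:8443/healthz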
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-09 05:53:22.054842919 +0000 UTC m=+5853.944651017
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect kubernetes-upgrade-054206
helpers_test.go:243: (dbg) docker inspect kubernetes-upgrade-054206:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1aaced55eb8effd75d8de5f078a8398fd6716a589860c5c38e93b81aa067b581",
	        "Created": "2025-12-09T05:40:30.367675378Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1771358,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T05:41:01.03371624Z",
	            "FinishedAt": "2025-12-09T05:40:59.993788656Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/1aaced55eb8effd75d8de5f078a8398fd6716a589860c5c38e93b81aa067b581/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1aaced55eb8effd75d8de5f078a8398fd6716a589860c5c38e93b81aa067b581/hostname",
	        "HostsPath": "/var/lib/docker/containers/1aaced55eb8effd75d8de5f078a8398fd6716a589860c5c38e93b81aa067b581/hosts",
	        "LogPath": "/var/lib/docker/containers/1aaced55eb8effd75d8de5f078a8398fd6716a589860c5c38e93b81aa067b581/1aaced55eb8effd75d8de5f078a8398fd6716a589860c5c38e93b81aa067b581-json.log",
	        "Name": "/kubernetes-upgrade-054206",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-054206:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-054206",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1aaced55eb8effd75d8de5f078a8398fd6716a589860c5c38e93b81aa067b581",
	                "LowerDir": "/var/lib/docker/overlay2/de9e3118034f44a2b9e1694001548245c877c6663b92c8bcf86ebb11463fba73-init/diff:/var/lib/docker/overlay2/cb3f2b8eaaa8875b2899fccd39c4eec1759909855a0b804bc10246bdeabb16ed/diff",
	                "MergedDir": "/var/lib/docker/overlay2/de9e3118034f44a2b9e1694001548245c877c6663b92c8bcf86ebb11463fba73/merged",
	                "UpperDir": "/var/lib/docker/overlay2/de9e3118034f44a2b9e1694001548245c877c6663b92c8bcf86ebb11463fba73/diff",
	                "WorkDir": "/var/lib/docker/overlay2/de9e3118034f44a2b9e1694001548245c877c6663b92c8bcf86ebb11463fba73/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-054206",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-054206/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-054206",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-054206",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-054206",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f4bb491ba3a55ec588374de0f2228897e6607499c019d6c3b77d97d6d7b4aedd",
	            "SandboxKey": "/var/run/docker/netns/f4bb491ba3a5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34501"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34502"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34505"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34503"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34504"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-054206": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:55:fb:bf:2b:ea",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4f45c1c226c408a54929f37aa1c075612b7042a4be6c776c5f3c1396a851d966",
	                    "EndpointID": "aae0a417072a3c9029ddaecaf7c10be5ec83091b8d5a66656d58fc5cf2382275",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-054206",
	                        "1aaced55eb8e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
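Most of the inspect output above is noise for this failure; the fields that matter are the container state, the node IP, and the host port mapped to 8443. A minimal sketch using docker's Go templates to pull just those three, with field paths taken from the JSON above:

	docker inspect -f '{{.State.Status}}' kubernetes-upgrade-054206
	docker inspect -f '{{(index .NetworkSettings.Networks "kubernetes-upgrade-054206").IPAddress}}' kubernetes-upgrade-054206
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' kubernetes-upgrade-054206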
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-054206 -n kubernetes-upgrade-054206
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-054206 -n kubernetes-upgrade-054206: exit status 2 (400.352359ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-054206 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p kubernetes-upgrade-054206 logs -n 25: (1.816425565s)
helpers_test.go:260: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                           ARGS                                                                                                            │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-880308 sudo crio config                                                                                                                                                                                         │ cilium-880308             │ jenkins │ v1.37.0 │ 09 Dec 25 05:38 UTC │                     │
	│ delete  │ -p cilium-880308                                                                                                                                                                                                          │ cilium-880308             │ jenkins │ v1.37.0 │ 09 Dec 25 05:38 UTC │ 09 Dec 25 05:38 UTC │
	│ start   │ -p force-systemd-env-772419 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                │ force-systemd-env-772419  │ jenkins │ v1.37.0 │ 09 Dec 25 05:38 UTC │ 09 Dec 25 05:38 UTC │
	│ delete  │ -p force-systemd-env-772419                                                                                                                                                                                               │ force-systemd-env-772419  │ jenkins │ v1.37.0 │ 09 Dec 25 05:38 UTC │ 09 Dec 25 05:39 UTC │
	│ start   │ -p force-systemd-flag-530604 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                               │ force-systemd-flag-530604 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:39 UTC │
	│ ssh     │ -p NoKubernetes-832858 sudo systemctl is-active --quiet service kubelet                                                                                                                                                   │ NoKubernetes-832858       │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │                     │
	│ delete  │ -p NoKubernetes-832858                                                                                                                                                                                                    │ NoKubernetes-832858       │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:39 UTC │
	│ start   │ -p cert-expiration-659753 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                    │ cert-expiration-659753    │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:39 UTC │
	│ ssh     │ force-systemd-flag-530604 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                      │ force-systemd-flag-530604 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:39 UTC │
	│ delete  │ -p force-systemd-flag-530604                                                                                                                                                                                              │ force-systemd-flag-530604 │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:39 UTC │
	│ start   │ -p cert-options-037055 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio │ cert-options-037055       │ jenkins │ v1.37.0 │ 09 Dec 25 05:39 UTC │ 09 Dec 25 05:40 UTC │
	│ ssh     │ cert-options-037055 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                               │ cert-options-037055       │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ ssh     │ -p cert-options-037055 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                             │ cert-options-037055       │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ delete  │ -p cert-options-037055                                                                                                                                                                                                    │ cert-options-037055       │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ start   │ -p kubernetes-upgrade-054206 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                  │ kubernetes-upgrade-054206 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:40 UTC │
	│ stop    │ -p kubernetes-upgrade-054206                                                                                                                                                                                              │ kubernetes-upgrade-054206 │ jenkins │ v1.37.0 │ 09 Dec 25 05:40 UTC │ 09 Dec 25 05:41 UTC │
	│ start   │ -p kubernetes-upgrade-054206 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                           │ kubernetes-upgrade-054206 │ jenkins │ v1.37.0 │ 09 Dec 25 05:41 UTC │                     │
	│ start   │ -p cert-expiration-659753 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                 │ cert-expiration-659753    │ jenkins │ v1.37.0 │ 09 Dec 25 05:42 UTC │ 09 Dec 25 05:43 UTC │
	│ delete  │ -p cert-expiration-659753                                                                                                                                                                                                 │ cert-expiration-659753    │ jenkins │ v1.37.0 │ 09 Dec 25 05:43 UTC │ 09 Dec 25 05:43 UTC │
	│ start   │ -p stopped-upgrade-056039 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                                                                                                      │ stopped-upgrade-056039    │ jenkins │ v1.35.0 │ 09 Dec 25 05:43 UTC │ 09 Dec 25 05:44 UTC │
	│ stop    │ stopped-upgrade-056039 stop                                                                                                                                                                                               │ stopped-upgrade-056039    │ jenkins │ v1.35.0 │ 09 Dec 25 05:44 UTC │ 09 Dec 25 05:44 UTC │
	│ start   │ -p stopped-upgrade-056039 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                  │ stopped-upgrade-056039    │ jenkins │ v1.37.0 │ 09 Dec 25 05:44 UTC │ 09 Dec 25 05:48 UTC │
	│ delete  │ -p stopped-upgrade-056039                                                                                                                                                                                                 │ stopped-upgrade-056039    │ jenkins │ v1.37.0 │ 09 Dec 25 05:48 UTC │ 09 Dec 25 05:48 UTC │
	│ start   │ -p running-upgrade-831739 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                                                                                                      │ running-upgrade-831739    │ jenkins │ v1.35.0 │ 09 Dec 25 05:48 UTC │ 09 Dec 25 05:49 UTC │
	│ start   │ -p running-upgrade-831739 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                  │ running-upgrade-831739    │ jenkins │ v1.37.0 │ 09 Dec 25 05:49 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 05:49:07
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
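
The header above fixes the klog-style prefix that every entry below carries. As a minimal, stdlib-only sketch (not minikube's own code), the prefix can be pulled apart with a regular expression; the sample line is copied verbatim from the log:

	package main

	import (
		"fmt"
		"regexp"
	)

	// klogLine matches the "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg"
	// layout documented in the log header above.
	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^ ]+:\d+)\] (.*)$`)

	func main() {
		line := "I1209 05:49:07.583435 1795150 out.go:360] Setting OutFile to fd 1 ..."
		m := klogLine.FindStringSubmatch(line)
		if m == nil {
			fmt.Println("not a klog line")
			return
		}
		fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
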
	I1209 05:49:07.583435 1795150 out.go:360] Setting OutFile to fd 1 ...
	I1209 05:49:07.583656 1795150 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:49:07.583685 1795150 out.go:374] Setting ErrFile to fd 2...
	I1209 05:49:07.583705 1795150 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:49:07.584002 1795150 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 05:49:07.584415 1795150 out.go:368] Setting JSON to false
	I1209 05:49:07.585413 1795150 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":37888,"bootTime":1765221460,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1209 05:49:07.585518 1795150 start.go:143] virtualization:  
	I1209 05:49:07.589195 1795150 out.go:179] * [running-upgrade-831739] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 05:49:07.592087 1795150 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 05:49:07.592179 1795150 notify.go:221] Checking for updates...
	I1209 05:49:07.597911 1795150 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 05:49:07.600824 1795150 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 05:49:07.603801 1795150 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1577059/.minikube
	I1209 05:49:07.606867 1795150 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 05:49:07.611678 1795150 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 05:49:07.616799 1795150 config.go:182] Loaded profile config "running-upgrade-831739": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1209 05:49:07.620368 1795150 out.go:179] * Kubernetes 1.34.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.2
	I1209 05:49:07.623242 1795150 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 05:49:07.648792 1795150 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 05:49:07.648914 1795150 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:49:07.716289 1795150 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-09 05:49:07.701097766 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
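
That blob is obtained by shelling out to `docker system info --format "{{json .}}"` (the cli_runner line above) and decoding the JSON. A self-contained sketch that decodes just a few of the fields visible in the dump; the struct is an illustrative subset, not minikube's internal type:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// dockerInfo is an illustrative subset of the JSON emitted by
	// `docker system info --format "{{json .}}"`; field names match the dump above.
	type dockerInfo struct {
		CgroupDriver  string
		ServerVersion string
		NCPU          int
		MemTotal      int64
	}

	func main() {
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			panic(err)
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		fmt.Printf("cgroup driver=%s server=%s cpus=%d mem=%d bytes\n",
			info.CgroupDriver, info.ServerVersion, info.NCPU, info.MemTotal)
	}
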
	I1209 05:49:07.716394 1795150 docker.go:319] overlay module found
	I1209 05:49:07.719575 1795150 out.go:179] * Using the docker driver based on existing profile
	I1209 05:49:07.722416 1795150 start.go:309] selected driver: docker
	I1209 05:49:07.722436 1795150 start.go:927] validating driver "docker" against &{Name:running-upgrade-831739 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:running-upgrade-831739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:49:07.722540 1795150 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 05:49:07.723352 1795150 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:49:07.796455 1795150 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-09 05:49:07.786894812 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:49:07.796790 1795150 cni.go:84] Creating CNI manager for ""
	I1209 05:49:07.796860 1795150 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 05:49:07.796903 1795150 start.go:353] cluster config:
	{Name:running-upgrade-831739 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:running-upgrade-831739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:49:07.800002 1795150 out.go:179] * Starting "running-upgrade-831739" primary control-plane node in "running-upgrade-831739" cluster
	I1209 05:49:07.802697 1795150 cache.go:134] Beginning downloading kic base image for docker with crio
	I1209 05:49:07.805725 1795150 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 05:49:07.808555 1795150 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1209 05:49:07.808670 1795150 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I1209 05:49:07.808897 1795150 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4
	I1209 05:49:07.808910 1795150 cache.go:65] Caching tarball of preloaded images
	I1209 05:49:07.809019 1795150 preload.go:238] Found /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1209 05:49:07.809029 1795150 cache.go:68] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1209 05:49:07.809130 1795150 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/running-upgrade-831739/config.json ...
	I1209 05:49:07.828770 1795150 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I1209 05:49:07.828793 1795150 cache.go:158] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I1209 05:49:07.828808 1795150 cache.go:243] Successfully downloaded all kic artifacts
	I1209 05:49:07.828840 1795150 start.go:360] acquireMachinesLock for running-upgrade-831739: {Name:mk311269f04a7b1186dc5883cfec72e265423054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:49:07.828898 1795150 start.go:364] duration metric: took 35.635µs to acquireMachinesLock for "running-upgrade-831739"
	I1209 05:49:07.828922 1795150 start.go:96] Skipping create...Using existing machine configuration
	I1209 05:49:07.828929 1795150 fix.go:54] fixHost starting: 
	I1209 05:49:07.829215 1795150 cli_runner.go:164] Run: docker container inspect running-upgrade-831739 --format={{.State.Status}}
	I1209 05:49:07.846325 1795150 fix.go:112] recreateIfNeeded on running-upgrade-831739: state=Running err=<nil>
	W1209 05:49:07.846353 1795150 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 05:49:07.849656 1795150 out.go:252] * Updating the running docker "running-upgrade-831739" container ...
	I1209 05:49:07.849726 1795150 machine.go:94] provisionDockerMachine start ...
	I1209 05:49:07.849826 1795150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-831739
	I1209 05:49:07.870558 1795150 main.go:143] libmachine: Using SSH client type: native
	I1209 05:49:07.870931 1795150 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34516 <nil> <nil>}
	I1209 05:49:07.870948 1795150 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 05:49:08.007566 1795150 main.go:143] libmachine: SSH cmd err, output: <nil>: running-upgrade-831739
	
	I1209 05:49:08.007593 1795150 ubuntu.go:182] provisioning hostname "running-upgrade-831739"
	I1209 05:49:08.007683 1795150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-831739
	I1209 05:49:08.029838 1795150 main.go:143] libmachine: Using SSH client type: native
	I1209 05:49:08.030163 1795150 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34516 <nil> <nil>}
	I1209 05:49:08.030181 1795150 main.go:143] libmachine: About to run SSH command:
	sudo hostname running-upgrade-831739 && echo "running-upgrade-831739" | sudo tee /etc/hostname
	I1209 05:49:08.184284 1795150 main.go:143] libmachine: SSH cmd err, output: <nil>: running-upgrade-831739
	
	I1209 05:49:08.184367 1795150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-831739
	I1209 05:49:08.203008 1795150 main.go:143] libmachine: Using SSH client type: native
	I1209 05:49:08.203334 1795150 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34516 <nil> <nil>}
	I1209 05:49:08.203356 1795150 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-831739' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-831739/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-831739' | sudo tee -a /etc/hosts; 
				fi
			fi
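
That /etc/hosts patch is templated per profile name. A hypothetical Go helper that renders the same idempotent snippet (the shell body is copied from the log; the helper name is invented for illustration):

	package main

	import "fmt"

	// hostsFixup renders the idempotent /etc/hosts patch shown in the log,
	// parameterized by the machine hostname.
	func hostsFixup(name string) string {
		return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
	  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
	  else
	    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
	  fi
	fi`, name)
	}

	func main() {
		fmt.Println(hostsFixup("running-upgrade-831739"))
	}
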
	I1209 05:49:08.326961 1795150 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1209 05:49:08.326995 1795150 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1577059/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1577059/.minikube}
	I1209 05:49:08.327020 1795150 ubuntu.go:190] setting up certificates
	I1209 05:49:08.327038 1795150 provision.go:84] configureAuth start
	I1209 05:49:08.327105 1795150 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-831739
	I1209 05:49:08.345349 1795150 provision.go:143] copyHostCerts
	I1209 05:49:08.345426 1795150 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem, removing ...
	I1209 05:49:08.345443 1795150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem
	I1209 05:49:08.345522 1795150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem (1675 bytes)
	I1209 05:49:08.345642 1795150 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem, removing ...
	I1209 05:49:08.345655 1795150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem
	I1209 05:49:08.345688 1795150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem (1078 bytes)
	I1209 05:49:08.345758 1795150 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem, removing ...
	I1209 05:49:08.345769 1795150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem
	I1209 05:49:08.345796 1795150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem (1123 bytes)
	I1209 05:49:08.345857 1795150 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-831739 san=[127.0.0.1 192.168.76.2 localhost minikube running-upgrade-831739]
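
configureAuth regenerates the machine server certificate with exactly the SANs listed in that line. A minimal sketch of issuing a certificate with the same SAN set using only the Go standard library; for brevity it is self-signed, whereas the real one is signed with the ca.pem key:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.running-upgrade-831739"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config above
			// SAN list from the provision.go line above.
			DNSNames:    []string{"localhost", "minikube", "running-upgrade-831739"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
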
	I1209 05:49:08.953602 1795150 provision.go:177] copyRemoteCerts
	I1209 05:49:08.953668 1795150 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 05:49:08.953709 1795150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-831739
	I1209 05:49:08.972373 1795150 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34516 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/running-upgrade-831739/id_rsa Username:docker}
	I1209 05:49:09.084835 1795150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1209 05:49:09.115540 1795150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 05:49:09.140996 1795150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 05:49:09.175097 1795150 provision.go:87] duration metric: took 848.033708ms to configureAuth
	I1209 05:49:09.175122 1795150 ubuntu.go:206] setting minikube options for container-runtime
	I1209 05:49:09.175310 1795150 config.go:182] Loaded profile config "running-upgrade-831739": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1209 05:49:09.175417 1795150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-831739
	I1209 05:49:09.196533 1795150 main.go:143] libmachine: Using SSH client type: native
	I1209 05:49:09.196834 1795150 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34516 <nil> <nil>}
	I1209 05:49:09.196854 1795150 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 05:49:09.648856 1795150 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 05:49:09.648925 1795150 machine.go:97] duration metric: took 1.799183556s to provisionDockerMachine
	I1209 05:49:09.648951 1795150 start.go:293] postStartSetup for "running-upgrade-831739" (driver="docker")
	I1209 05:49:09.648977 1795150 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 05:49:09.649088 1795150 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 05:49:09.649173 1795150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-831739
	I1209 05:49:09.666359 1795150 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34516 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/running-upgrade-831739/id_rsa Username:docker}
	I1209 05:49:09.759621 1795150 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 05:49:09.762758 1795150 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 05:49:09.762791 1795150 main.go:143] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1209 05:49:09.762824 1795150 main.go:143] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1209 05:49:09.762839 1795150 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1209 05:49:09.762850 1795150 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1577059/.minikube/addons for local assets ...
	I1209 05:49:09.762910 1795150 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1577059/.minikube/files for local assets ...
	I1209 05:49:09.762997 1795150 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem -> 15805212.pem in /etc/ssl/certs
	I1209 05:49:09.763105 1795150 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 05:49:09.771552 1795150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem --> /etc/ssl/certs/15805212.pem (1708 bytes)
	I1209 05:49:09.797084 1795150 start.go:296] duration metric: took 148.101716ms for postStartSetup
	I1209 05:49:09.797207 1795150 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:49:09.797270 1795150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-831739
	I1209 05:49:09.813712 1795150 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34516 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/running-upgrade-831739/id_rsa Username:docker}
	I1209 05:49:09.904375 1795150 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 05:49:09.909367 1795150 fix.go:56] duration metric: took 2.080430676s for fixHost
	I1209 05:49:09.909405 1795150 start.go:83] releasing machines lock for "running-upgrade-831739", held for 2.080493791s
	I1209 05:49:09.909504 1795150 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-831739
	I1209 05:49:09.926240 1795150 ssh_runner.go:195] Run: cat /version.json
	I1209 05:49:09.926297 1795150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-831739
	I1209 05:49:09.926652 1795150 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 05:49:09.926703 1795150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-831739
	I1209 05:49:09.960142 1795150 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34516 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/running-upgrade-831739/id_rsa Username:docker}
	I1209 05:49:09.965497 1795150 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34516 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/running-upgrade-831739/id_rsa Username:docker}
	W1209 05:49:10.319885 1795150 out.go:285] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.35.0 -> Actual minikube version: v1.37.0
	I1209 05:49:10.320235 1795150 ssh_runner.go:195] Run: systemctl --version
	I1209 05:49:10.328790 1795150 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 05:49:10.512994 1795150 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1209 05:49:10.528381 1795150 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 05:49:10.555172 1795150 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1209 05:49:10.555261 1795150 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 05:49:10.572891 1795150 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 05:49:10.572957 1795150 start.go:496] detecting cgroup driver to use...
	I1209 05:49:10.573005 1795150 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 05:49:10.573096 1795150 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 05:49:10.601377 1795150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 05:49:10.619980 1795150 docker.go:218] disabling cri-docker service (if available) ...
	I1209 05:49:10.620079 1795150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 05:49:10.635717 1795150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 05:49:10.655972 1795150 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 05:49:10.822735 1795150 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 05:49:10.977231 1795150 docker.go:234] disabling docker service ...
	I1209 05:49:10.977349 1795150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 05:49:11.012633 1795150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 05:49:11.037386 1795150 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 05:49:11.227492 1795150 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 05:49:11.364823 1795150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 05:49:11.377343 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 05:49:11.408931 1795150 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 05:49:11.409050 1795150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 05:49:11.437412 1795150 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 05:49:11.437524 1795150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 05:49:11.465416 1795150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 05:49:11.491789 1795150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 05:49:11.525544 1795150 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 05:49:11.556720 1795150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 05:49:11.589674 1795150 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 05:49:11.613130 1795150 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
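
The sed calls above rewrite /etc/crio/crio.conf.d/02-crio.conf line by line. A small Go sketch of the same line-oriented rewrite for the first two keys; the input TOML fragment is illustrative, while the replacement values are the ones from the log:

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// Illustrative 02-crio.conf fragment before the rewrite.
		conf := `[crio.runtime]
	cgroup_manager = "systemd"
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	`
		// Mirrors: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
		// Mirrors: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		fmt.Print(conf)
	}
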
	I1209 05:49:11.649716 1795150 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 05:49:11.670496 1795150 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 05:49:11.692648 1795150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:49:11.910715 1795150 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 05:49:12.149294 1795150 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 05:49:12.149385 1795150 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 05:49:12.153314 1795150 start.go:564] Will wait 60s for crictl version
	I1209 05:49:12.153418 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:49:12.157395 1795150 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 05:49:12.210344 1795150 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1209 05:49:12.210442 1795150 ssh_runner.go:195] Run: crio --version
	I1209 05:49:12.251206 1795150 ssh_runner.go:195] Run: crio --version
	I1209 05:49:12.297759 1795150 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.24.6 ...
	I1209 05:49:12.300614 1795150 cli_runner.go:164] Run: docker network inspect running-upgrade-831739 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
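
The network inspect template above assembles a JSON summary by hand. If only the subnet and gateway data is needed, the same fields can be probed directly (a sketch; the network name matches the profile above):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Simpler probe of the same IPAM data the big template above extracts.
		out, err := exec.Command("docker", "network", "inspect",
			"running-upgrade-831739", "--format", "{{json .IPAM.Config}}").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println(string(out))
	}
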
	I1209 05:49:12.317618 1795150 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1209 05:49:12.321647 1795150 kubeadm.go:884] updating cluster {Name:running-upgrade-831739 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:running-upgrade-831739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 05:49:12.321780 1795150 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1209 05:49:12.321851 1795150 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 05:49:12.366814 1795150 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 05:49:12.366839 1795150 crio.go:433] Images already preloaded, skipping extraction
	I1209 05:49:12.366904 1795150 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 05:49:12.405600 1795150 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 05:49:12.405623 1795150 cache_images.go:86] Images are preloaded, skipping loading
	I1209 05:49:12.405632 1795150 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.32.0 crio true true} ...
	I1209 05:49:12.405733 1795150 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=running-upgrade-831739 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:running-upgrade-831739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 05:49:12.405824 1795150 ssh_runner.go:195] Run: crio config
	I1209 05:49:12.456360 1795150 cni.go:84] Creating CNI manager for ""
	I1209 05:49:12.456440 1795150 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 05:49:12.456471 1795150 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 05:49:12.456496 1795150 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-831739 NodeName:running-upgrade-831739 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 05:49:12.456658 1795150 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "running-upgrade-831739"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
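
The kubeadm.yaml just generated is a four-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A stdlib-only sketch that splits such a stream on the document separator and lists each kind (the embedded config is abbreviated from the one above):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Abbreviated version of the kubeadm.yaml stream above.
		config := `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	`
		for i, doc := range strings.Split(config, "\n---\n") {
			for _, line := range strings.Split(doc, "\n") {
				if strings.HasPrefix(line, "kind:") {
					fmt.Printf("document %d: %s\n", i, strings.TrimPrefix(line, "kind: "))
				}
			}
		}
	}
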
	
	I1209 05:49:12.456756 1795150 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1209 05:49:12.467857 1795150 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 05:49:12.467975 1795150 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 05:49:12.478728 1795150 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1209 05:49:12.497438 1795150 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 05:49:12.516303 1795150 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1209 05:49:12.535160 1795150 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1209 05:49:12.538804 1795150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:49:12.671857 1795150 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 05:49:12.684122 1795150 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/running-upgrade-831739 for IP: 192.168.76.2
	I1209 05:49:12.684155 1795150 certs.go:195] generating shared ca certs ...
	I1209 05:49:12.684180 1795150 certs.go:227] acquiring lock for ca certs: {Name:mkbe8bce08db7aa945866791683d426e1b560718 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:49:12.684434 1795150 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key
	I1209 05:49:12.684494 1795150 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key
	I1209 05:49:12.684506 1795150 certs.go:257] generating profile certs ...
	I1209 05:49:12.684596 1795150 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/running-upgrade-831739/client.key
	I1209 05:49:12.684692 1795150 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/running-upgrade-831739/apiserver.key.0357d6b8
	I1209 05:49:12.684738 1795150 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/running-upgrade-831739/proxy-client.key
	I1209 05:49:12.684859 1795150 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem (1338 bytes)
	W1209 05:49:12.684897 1795150 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521_empty.pem, impossibly tiny 0 bytes
	I1209 05:49:12.684910 1795150 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 05:49:12.684938 1795150 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem (1078 bytes)
	I1209 05:49:12.684967 1795150 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem (1123 bytes)
	I1209 05:49:12.684999 1795150 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem (1675 bytes)
	I1209 05:49:12.685049 1795150 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem (1708 bytes)
	I1209 05:49:12.685663 1795150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 05:49:12.711750 1795150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 05:49:12.736679 1795150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 05:49:12.761497 1795150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 05:49:12.785630 1795150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/running-upgrade-831739/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1209 05:49:12.810620 1795150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/running-upgrade-831739/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 05:49:12.835491 1795150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/running-upgrade-831739/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 05:49:12.860356 1795150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/running-upgrade-831739/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 05:49:12.886352 1795150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 05:49:12.911129 1795150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem --> /usr/share/ca-certificates/1580521.pem (1338 bytes)
	I1209 05:49:12.936190 1795150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem --> /usr/share/ca-certificates/15805212.pem (1708 bytes)
	I1209 05:49:12.960973 1795150 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 05:49:12.982724 1795150 ssh_runner.go:195] Run: openssl version
	I1209 05:49:12.988835 1795150 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:49:12.999517 1795150 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 05:49:13.012813 1795150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:49:13.018180 1795150 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:17 /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:49:13.018259 1795150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:49:13.026383 1795150 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1209 05:49:13.039992 1795150 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1580521.pem
	I1209 05:49:13.049816 1795150 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1580521.pem /etc/ssl/certs/1580521.pem
	I1209 05:49:13.061658 1795150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1580521.pem
	I1209 05:49:13.066122 1795150 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:27 /usr/share/ca-certificates/1580521.pem
	I1209 05:49:13.066220 1795150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1580521.pem
	I1209 05:49:13.074435 1795150 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 05:49:13.089798 1795150 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15805212.pem
	I1209 05:49:13.098475 1795150 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15805212.pem /etc/ssl/certs/15805212.pem
	I1209 05:49:13.107631 1795150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15805212.pem
	I1209 05:49:13.111279 1795150 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:27 /usr/share/ca-certificates/15805212.pem
	I1209 05:49:13.111346 1795150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15805212.pem
	I1209 05:49:13.118355 1795150 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 05:49:13.126818 1795150 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 05:49:13.130436 1795150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 05:49:13.137272 1795150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 05:49:13.144358 1795150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 05:49:13.151387 1795150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 05:49:13.158640 1795150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 05:49:13.165615 1795150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
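
Each of those openssl runs uses `-checkend 86400`, i.e. "does this certificate expire within the next 24 hours?". The equivalent check with the Go standard library (the certificate path is one of the files checked above; run this on the node):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Same file the log checks with `openssl x509 -noout -checkend 86400`.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// -checkend 86400 fails if NotAfter falls within the next 86400 seconds.
		if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
			fmt.Println("certificate expires within 24h:", cert.NotAfter)
		} else {
			fmt.Println("certificate valid past 24h:", cert.NotAfter)
		}
	}
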
	I1209 05:49:13.172442 1795150 kubeadm.go:401] StartCluster: {Name:running-upgrade-831739 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:running-upgrade-831739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:49:13.172527 1795150 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 05:49:13.172587 1795150 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 05:49:13.219939 1795150 cri.go:89] found id: "84bd9635a7fbdff3c6509d9b30656f966b0ccfe9b3b8d3f1ef0ce62293221298"
	I1209 05:49:13.219960 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:49:13.219965 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:49:13.219969 1795150 cri.go:89] found id: "901bf4b0bd4210167d9dc1da489a2070bb2343895c703bd9c04e9c51f1404d7a"
	I1209 05:49:13.219972 1795150 cri.go:89] found id: ""
	I1209 05:49:13.220027 1795150 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 05:49:13.239945 1795150 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a/userdata","rootfs":"/var/lib/containers/storage/overlay/d23ce24d14cb054540f28fe13153a0451937b20259837bd13c37ad8bcbd45df3/merged","created":"2025-12-09T05:49:10.12808269Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"8c4b12d6","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"8c4b12d6\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-12-09T05:49:10.027092201Z","io.kubernetes.cri-o.Image":"c3ff26fb59f37b5910877d6e3de46aa6b020e586bdf2b441ab5f53b6f0a1797d","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.32.0","io.kubernetes.cri-o.ImageRef":"c3ff26fb59f37b5910877d6e3de46aa6b020e586bdf2b441ab5f53b6f0a1797d","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-running-upgrade-831739\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"15d204088fdafc1c981e0179ad1c41be\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-running-upgrade-831739_15d204088fdafc1c981e0179ad1c41be/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d23ce24d14cb054540f28fe13153a0451937b20259837bd13c37ad8bcbd45df3/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-running-upgrade-831739_kube-system_15d204088fdafc1c981e0179ad1c41be_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4083055e0037cf484f77ee64b199220b48f4de90573ed4f89df74b290814d0c9/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"4083055e0037cf484f77ee64b199220b48f4de90573ed4f89df74b290814d0c9","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-running-upgrade-831739_kube-system_15d204088fdafc1c981e0179ad1c41be_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/15d204088fdafc1c981e0179ad1c41be/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/15d204088fdafc1c981e0179ad1c41be/containers/kube-scheduler/7e514721\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-running-upgrade-831739","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"15d204088fdafc1c981e0179ad1c41be","kubernetes.io/config.hash":"15d204088fdafc1c981e0179ad1c41be","kubernetes.io/config.seen":"2025-12-09T05:48:57.778801148Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e/userdata","rootfs":"/var/lib/containers/storage/overlay/7d9e698e0aaa0fd307c8f60af41880af29a80e3afc94ef98d5677d644f9e25f8/merged","created":"2025-12-09T05:49:10.135466988Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e68be80f","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e68be80f\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-12-09T05:49:10.044452903Z","io.kubernetes.cri-o.Image":"7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.16-0","io.kubernetes.cri-o.ImageRef":"7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-running-upgrade-831739\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"3965b4ce1f5743ad08f197acd89bd9f9\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-running-upgrade-831739_3965b4ce1f5743ad08f197acd89bd9f9/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/7d9e698e0aaa0fd307c8f60af41880af29a80e3afc94ef98d5677d644f9e25f8/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-running-upgrade-831739_kube-system_3965b4ce1f5743ad08f197acd89bd9f9_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/d8dd8849f37a278893e9a196249f92b0bbc4194a50feabeacef055ed85282f3d/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"d8dd8849f37a278893e9a196249f92b0bbc4194a50feabeacef055ed85282f3d","io.kubernetes.cri-o.SandboxName":"k8s_etcd-running-upgrade-831739_kube-system_3965b4ce1f5743ad08f197acd89bd9f9_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/3965b4ce1f5743ad08f197acd89bd9f9/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/3965b4ce1f5743ad08f197acd89bd9f9/containers/etcd/ebbb574f\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-running-upgrade-831739","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"3965b4ce1f5743ad08f197acd89bd9f9","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.76.2:2379","kubernetes.io/config.hash":"3965b4ce1f5743ad08f197acd89bd9f9","kubernetes.io/config.seen":"2025-12-09T05:48:57.778790506Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"84bd9635a7fbdff3c6509d9b30656f966b0ccfe9b3b8d3f1ef0ce62293221298","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/84bd9635a7fbdff3c6509d9b30656f966b0ccfe9b3b8d3f1ef0ce62293221298/userdata","rootfs":"/var/lib/containers/storage/overlay/042c49e25f71fec39896cded838a61c096e65a885c003a7573f15622dc870401/merged","created":"2025-12-09T05:49:10.137097938Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"bf915d6a","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"bf915d6a\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"84bd9635a7fbdff3c6509d9b30656f966b0ccfe9b3b8d3f1ef0ce62293221298","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-12-09T05:49:10.068637971Z","io.kubernetes.cri-o.Image":"2b5bd0f16085ac8a7260c30946f3668948a0bb88ac0b9cad635940e3dbef16dc","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.32.0","io.kubernetes.cri-o.ImageRef":"2b5bd0f16085ac8a7260c30946f3668948a0bb88ac0b9cad635940e3dbef16dc","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-running-upgrade-831739\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"5dd12123a99b8c59b6480c08ee934cab\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-running-upgrade-831739_5dd12123a99b8c59b6480c08ee934cab/kube-apiserver/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/042c49e25f71fec39896cded838a61c096e65a885c003a7573f15622dc870401/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-running-upgrade-831739_kube-system_5dd12123a99b8c59b6480c08ee934cab_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/7906a0754e66999c8969abfda8116d427366317abd27c9296da9a0324ce21421/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"7906a0754e66999c8969abfda8116d427366317abd27c9296da9a0324ce21421","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-running-upgrade-831739_kube-system_5dd12123a99b8c59b6480c08ee934cab_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/5dd12123a99b8c59b6480c08ee934cab/containers/kube-apiserver/6e288733\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/5dd12123a99b8c59b6480c08ee934cab/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-running-upgrade-831739","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"5dd12123a99b8c59b6480c08ee934cab","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.76.2:8443","kubernetes.io/config.hash":"5dd12123a99b8c59b6480c08ee934cab","kubernetes.io/config.seen":"2025-12-09T05:48:57.778797292Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"901bf4b0bd4210167d9dc1da489a2070bb2343895c703bd9c04e9c51f1404d7a","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/901bf4b0bd4210167d9dc1da489a2070bb2343895c703bd9c04e9c51f1404d7a/userdata","rootfs":"/var/lib/containers/storage/overlay/88f2f89b02833a290d6fd8e4802628da2a8c66e254a1233cb696059496b1d2b1/merged","created":"2025-12-09T05:49:10.080086056Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"99f3a73e","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"99f3a73e\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"901bf4b0bd4210167d9dc1da489a2070bb2343895c703bd9c04e9c51f1404d7a","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-12-09T05:49:09.979077638Z","io.kubernetes.cri-o.Image":"a8d049396f6b8f19df1e3f6b132cb1b9696806ddf19808f97305dd16fce9450c","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.32.0","io.kubernetes.cri-o.ImageRef":"a8d049396f6b8f19df1e3f6b132cb1b9696806ddf19808f97305dd16fce9450c","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-running-upgrade-831739\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"d64417b3b205bf3546f5f00bd1ab2233\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-running-upgrade-831739_d64417b3b205bf3546f5f00bd1ab2233/kube-controller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/88f2f89b02833a290d6fd8e4802628da2a8c66e254a1233cb696059496b1d2b1/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-running-upgrade-831739_kube-system_d64417b3b205bf3546f5f00bd1ab2233_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4e7c6fcdf706876f8348386cf2b911e9d475b3563c1a3090945dc852c7754d4d/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"4e7c6fcdf706876f8348386cf2b911e9d475b3563c1a3090945dc852c7754d4d","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-running-upgrade-831739_kube-system_d64417b3b205bf3546f5f00bd1ab2233_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/d64417b3b205bf3546f5f00bd1ab2233/containers/kube-controller-manager/4bc0aab0\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/d64417b3b205bf3546f5f00bd1ab2233/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-running-upgrade-831739","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"d64417b3b205bf3546f5f00bd1ab2233","kubernetes.io/config.hash":"d64417b3b205bf3546f5f00bd1ab2233","kubernetes.io/config.seen":"2025-12-09T05:48:57.778799261Z","kubernetes.io/config.source":"file"},"owner":"root"}]
	I1209 05:49:13.240254 1795150 cri.go:126] list returned 4 containers
	I1209 05:49:13.240270 1795150 cri.go:129] container: {ID:672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a Status:stopped}
	I1209 05:49:13.240285 1795150 cri.go:135] skipping {672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a stopped}: state = "stopped", want "paused"
	I1209 05:49:13.240297 1795150 cri.go:129] container: {ID:6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e Status:stopped}
	I1209 05:49:13.240311 1795150 cri.go:135] skipping {6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e stopped}: state = "stopped", want "paused"
	I1209 05:49:13.240319 1795150 cri.go:129] container: {ID:84bd9635a7fbdff3c6509d9b30656f966b0ccfe9b3b8d3f1ef0ce62293221298 Status:stopped}
	I1209 05:49:13.240324 1795150 cri.go:135] skipping {84bd9635a7fbdff3c6509d9b30656f966b0ccfe9b3b8d3f1ef0ce62293221298 stopped}: state = "stopped", want "paused"
	I1209 05:49:13.240328 1795150 cri.go:129] container: {ID:901bf4b0bd4210167d9dc1da489a2070bb2343895c703bd9c04e9c51f1404d7a Status:stopped}
	I1209 05:49:13.240333 1795150 cri.go:135] skipping {901bf4b0bd4210167d9dc1da489a2070bb2343895c703bd9c04e9c51f1404d7a stopped}: state = "stopped", want "paused"
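
cri.go decodes the `runc list -f json` array above and keeps only containers whose state matches the requested one ({State:paused} here), which is why all four stopped containers are skipped. A sketch of that decode-and-filter step, assuming a struct with only the id and status fields the log actually uses:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	// listByStatus runs `runc list -f json` and keeps containers in the
	// wanted state, logging a skip line for everything else.
	func listByStatus(want string) ([]runcContainer, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			return nil, err
		}
		var all []runcContainer
		if err := json.Unmarshal(out, &all); err != nil {
			return nil, err
		}
		var kept []runcContainer
		for _, c := range all {
			if c.Status != want {
				fmt.Printf("skipping {%s %s}: state = %q, want %q\n", c.ID, c.Status, c.Status, want)
				continue
			}
			kept = append(kept, c)
		}
		return kept, nil
	}

	func main() {
		paused, err := listByStatus("paused")
		if err != nil {
			fmt.Println("runc list:", err)
			return
		}
		fmt.Println("paused containers:", len(paused))
	}
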
	I1209 05:49:13.240409 1795150 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 05:49:13.249666 1795150 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1209 05:49:13.249697 1795150 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1209 05:49:13.249757 1795150 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 05:49:13.258389 1795150 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 05:49:13.258942 1795150 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-831739" does not appear in /home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 05:49:13.259189 1795150 kubeconfig.go:62] /home/jenkins/minikube-integration/22081-1577059/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-831739" cluster setting kubeconfig missing "running-upgrade-831739" context setting]
	I1209 05:49:13.259576 1795150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/kubeconfig: {Name:mk56da51bd85daae017f7ca18ae73d8a385a4c6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:49:13.260285 1795150 kapi.go:59] client config for running-upgrade-831739: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/running-upgrade-831739/client.crt", KeyFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/running-upgrade-831739/client.key", CAFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 05:49:13.260814 1795150 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1209 05:49:13.260833 1795150 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1209 05:49:13.260839 1795150 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1209 05:49:13.260843 1795150 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1209 05:49:13.260847 1795150 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
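
These client-go feature gates are resolved by envvar.go from the process environment. To my knowledge the variables are named KUBE_FEATURE_<gate>, but that naming is an assumption worth verifying against the vendored client-go. A one-line sketch of flipping a gate before any client is constructed:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Assumed naming: client-go's envvar feature gates read
		// KUBE_FEATURE_<gate>; this must be set before the first client
		// is built, since defaults are logged once at startup.
		os.Setenv("KUBE_FEATURE_WatchListClient", "true")
		fmt.Println("WatchListClient requested:", os.Getenv("KUBE_FEATURE_WatchListClient"))
	}
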
	I1209 05:49:13.261126 1795150 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 05:49:13.271059 1795150 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-09 05:48:50.485307528 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-09 05:49:12.529532789 +0000
	@@ -41,9 +41,6 @@
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      - name: "proxy-refresh-interval"
	-        value: "70000"
	 kubernetesVersion: v1.32.0
	 networking:
	   dnsDomain: cluster.local
	
	-- /stdout --
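
Drift detection here is just `diff -u` between the kubeadm.yaml already on the node and the freshly rendered kubeadm.yaml.new: diff exits 0 when the files match, 1 when they differ (drift, so reconfigure), and greater than 1 on a real failure. A sketch of that exit-code triage; the configDrift helper name is illustrative:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// configDrift runs `diff -u old new`: exit 0 means no drift, exit 1
	// means the files differ (the diff text is the evidence), and any
	// other exit means diff itself failed.
	func configDrift(oldPath, newPath string) (bool, string, error) {
		out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
		if err == nil {
			return false, "", nil
		}
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
			return true, string(out), nil
		}
		return false, "", err
	}

	func main() {
		drift, diff, err := configDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			fmt.Println("diff failed:", err)
			return
		}
		if drift {
			fmt.Print("detected kubeadm config drift:\n" + diff)
		}
	}
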
	I1209 05:49:13.271090 1795150 kubeadm.go:1161] stopping kube-system containers ...
	I1209 05:49:13.271104 1795150 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1209 05:49:13.271206 1795150 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 05:49:13.310076 1795150 cri.go:89] found id: "84bd9635a7fbdff3c6509d9b30656f966b0ccfe9b3b8d3f1ef0ce62293221298"
	I1209 05:49:13.310107 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:49:13.310112 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:49:13.310116 1795150 cri.go:89] found id: "901bf4b0bd4210167d9dc1da489a2070bb2343895c703bd9c04e9c51f1404d7a"
	I1209 05:49:13.310119 1795150 cri.go:89] found id: ""
	I1209 05:49:13.310125 1795150 cri.go:252] Stopping containers: [84bd9635a7fbdff3c6509d9b30656f966b0ccfe9b3b8d3f1ef0ce62293221298 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a 901bf4b0bd4210167d9dc1da489a2070bb2343895c703bd9c04e9c51f1404d7a]
	I1209 05:49:13.310219 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:49:13.314120 1795150 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 84bd9635a7fbdff3c6509d9b30656f966b0ccfe9b3b8d3f1ef0ce62293221298 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a 901bf4b0bd4210167d9dc1da489a2070bb2343895c703bd9c04e9c51f1404d7a
	I1209 05:49:13.388244 1795150 ssh_runner.go:195] Run: sudo systemctl stop kubelet
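
Stopping the kube-system containers passes every ID to a single `crictl stop --timeout=10` invocation, then stops the kubelet so nothing restarts them mid-reconfigure. A sketch of that pair of commands from Go; the ID list is abbreviated to two of the four from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// stopKubeSystem mirrors the two commands above: one crictl stop for
	// all container IDs, then stop the kubelet.
	func stopKubeSystem(ids []string) error {
		args := append([]string{"/usr/bin/crictl", "stop", "--timeout=10"}, ids...)
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("crictl stop: %v: %s", err, out)
		}
		return exec.Command("sudo", "systemctl", "stop", "kubelet").Run()
	}

	func main() {
		ids := []string{
			"84bd9635a7fbdff3c6509d9b30656f966b0ccfe9b3b8d3f1ef0ce62293221298",
			"901bf4b0bd4210167d9dc1da489a2070bb2343895c703bd9c04e9c51f1404d7a",
		}
		if err := stopKubeSystem(ids); err != nil {
			fmt.Println(err)
		}
	}
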
	I1209 05:49:13.505801 1795150 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 05:49:13.515152 1795150 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5651 Dec  9 05:48 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Dec  9 05:48 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Dec  9 05:49 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Dec  9 05:48 /etc/kubernetes/scheduler.conf
	
	I1209 05:49:13.515245 1795150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 05:49:13.525154 1795150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 05:49:13.534366 1795150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 05:49:13.543598 1795150 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 05:49:13.543701 1795150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 05:49:13.552576 1795150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 05:49:13.561800 1795150 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 05:49:13.561902 1795150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
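
Each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint; files that do not reference https://control-plane.minikube.internal:8443 are deleted so the `kubeadm init phase kubeconfig all` below rewrites them. A sketch of that keep-or-prune rule; pruneIfStale is a hypothetical helper:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// pruneIfStale keeps a kubeconfig only if it references the expected
	// endpoint; otherwise it removes the file so kubeadm regenerates it.
	func pruneIfStale(path, endpoint string) error {
		data, err := os.ReadFile(path)
		if os.IsNotExist(err) {
			return nil // already absent, nothing to prune
		}
		if err != nil {
			return err
		}
		if strings.Contains(string(data), endpoint) {
			return nil // endpoint present, keep the file
		}
		fmt.Printf("%q not in %s - removing\n", endpoint, path)
		return os.Remove(path)
	}

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		for _, f := range []string{
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			if err := pruneIfStale(f, endpoint); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}
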
	I1209 05:49:13.571072 1795150 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 05:49:13.580540 1795150 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 05:49:13.629580 1795150 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 05:49:15.948890 1795150 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.319274582s)
	I1209 05:49:15.948962 1795150 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 05:49:16.132194 1795150 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 05:49:16.226001 1795150 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 05:49:16.309070 1795150 api_server.go:52] waiting for apiserver process to appear ...
	I1209 05:49:16.309159 1795150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:49:16.809708 1795150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:49:17.310108 1795150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:49:17.330476 1795150 api_server.go:72] duration metric: took 1.02141137s to wait for apiserver process to appear ...
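
The process wait polls `pgrep -xnf kube-apiserver.*minikube.*` roughly every 500ms until it exits 0, and only then does the healthz wait below take over. A sketch of that poll loop; waitForProcess and the one-minute deadline are illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForProcess polls pgrep until it exits 0 (a process matched the
	// pattern) or the deadline passes.
	func waitForProcess(pattern string, deadline time.Duration) error {
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			if exec.Command("sudo", "pgrep", "-xnf", pattern).Run() == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("no process matching %q after %s", pattern, deadline)
	}

	func main() {
		if err := waitForProcess("kube-apiserver.*minikube.*", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
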
	I1209 05:49:17.330504 1795150 api_server.go:88] waiting for apiserver healthz status ...
	I1209 05:49:17.330527 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:17.648210 1771230 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001282121s
	I1209 05:49:17.648250 1771230 kubeadm.go:319] 
	I1209 05:49:17.648309 1771230 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1209 05:49:17.648347 1771230 kubeadm.go:319] 	- The kubelet is not running
	I1209 05:49:17.648455 1771230 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 05:49:17.648465 1771230 kubeadm.go:319] 
	I1209 05:49:17.648570 1771230 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 05:49:17.648605 1771230 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1209 05:49:17.648639 1771230 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1209 05:49:17.648648 1771230 kubeadm.go:319] 
	I1209 05:49:17.652355 1771230 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 05:49:17.652796 1771230 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1209 05:49:17.652911 1771230 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 05:49:17.653153 1771230 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1209 05:49:17.653163 1771230 kubeadm.go:319] 
	I1209 05:49:17.653233 1771230 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1209 05:49:17.653337 1771230 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001282121s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
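
The failing check kubeadm describes above is a plain GET of the kubelet's local healthz endpoint, the same probe as the quoted curl. A sketch of issuing that probe once; a healthy kubelet answers 200 with body "ok", and anything else within kubeadm's 4m0s window means the kubelet never came up:

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// One probe of the endpoint kubeadm's kubelet-check polls.
		client := &http.Client{Timeout: 2 * time.Second}
		resp, err := client.Get("http://127.0.0.1:10248/healthz")
		if err != nil {
			fmt.Println("kubelet not healthy:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("kubelet healthz: %d %s\n", resp.StatusCode, body)
	}
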
	
	I1209 05:49:17.653423 1771230 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1209 05:49:18.095266 1771230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:49:18.110093 1771230 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 05:49:18.110162 1771230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 05:49:18.121779 1771230 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 05:49:18.121855 1771230 kubeadm.go:158] found existing configuration files:
	
	I1209 05:49:18.121935 1771230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 05:49:18.131907 1771230 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 05:49:18.132061 1771230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 05:49:18.140910 1771230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 05:49:18.150684 1771230 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 05:49:18.150756 1771230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 05:49:18.160144 1771230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 05:49:18.170670 1771230 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 05:49:18.170732 1771230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 05:49:18.179927 1771230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 05:49:18.190022 1771230 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 05:49:18.190139 1771230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 05:49:18.199283 1771230 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 05:49:18.280223 1771230 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1209 05:49:18.282715 1771230 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 05:49:18.416764 1771230 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 05:49:18.416913 1771230 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1209 05:49:18.416994 1771230 kubeadm.go:319] OS: Linux
	I1209 05:49:18.417078 1771230 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 05:49:18.417163 1771230 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1209 05:49:18.417248 1771230 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 05:49:18.417331 1771230 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 05:49:18.417409 1771230 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 05:49:18.417488 1771230 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 05:49:18.417579 1771230 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 05:49:18.417658 1771230 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 05:49:18.417734 1771230 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1209 05:49:18.503349 1771230 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 05:49:18.503554 1771230 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 05:49:18.503697 1771230 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 05:49:18.524434 1771230 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 05:49:18.530064 1771230 out.go:252]   - Generating certificates and keys ...
	I1209 05:49:18.530229 1771230 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 05:49:18.530356 1771230 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 05:49:18.530465 1771230 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 05:49:18.530547 1771230 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1209 05:49:18.530661 1771230 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 05:49:18.530739 1771230 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1209 05:49:18.530825 1771230 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1209 05:49:18.530920 1771230 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1209 05:49:18.531024 1771230 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 05:49:18.531150 1771230 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 05:49:18.531555 1771230 kubeadm.go:319] [certs] Using the existing "sa" key
	I1209 05:49:18.531855 1771230 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 05:49:18.884690 1771230 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 05:49:18.976514 1771230 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 05:49:19.698010 1771230 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 05:49:20.030584 1771230 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 05:49:20.539910 1771230 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 05:49:20.542725 1771230 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 05:49:20.545873 1771230 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 05:49:20.549537 1771230 out.go:252]   - Booting up control plane ...
	I1209 05:49:20.549698 1771230 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 05:49:20.549831 1771230 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 05:49:20.549928 1771230 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 05:49:20.564258 1771230 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 05:49:20.564746 1771230 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1209 05:49:20.574152 1771230 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1209 05:49:20.574951 1771230 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 05:49:20.575174 1771230 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 05:49:20.712152 1771230 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 05:49:20.712299 1771230 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 05:49:22.331758 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 05:49:22.331800 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:27.332102 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 05:49:27.332147 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:32.333521 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 05:49:32.333582 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:37.336887 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 05:49:37.336932 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:37.795195 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:45502->192.168.76.2:8443: read: connection reset by peer
	I1209 05:49:37.831445 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:37.831870 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:49:38.331147 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:38.331536 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:49:38.830747 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:38.831102 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:49:39.330664 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:39.331108 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:49:39.830667 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:39.831096 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:49:40.330652 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:40.331037 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:49:40.830662 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:40.831086 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:49:41.330663 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:41.331123 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:49:41.830650 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:41.831012 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:49:42.330700 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:42.331229 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:49:42.831063 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:42.831523 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:49:43.331206 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:43.331598 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:49:43.831248 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:43.831729 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:49:44.331343 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:44.331777 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:49:44.831463 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:44.831847 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:49:45.331601 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:45.332104 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:49:45.830659 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:45.831080 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:49:46.330666 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:46.331124 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:49:46.830644 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:46.831056 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:49:47.330740 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:47.331199 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:49:47.830724 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:47.831140 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:49:48.330705 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:48.331169 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:49:48.830663 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:48.831091 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:49:49.330658 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:49.331068 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:49:49.830742 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:49.831185 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:49:50.330673 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:50.331141 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:49:50.830667 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:50.831043 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:49:51.330681 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:51.331157 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:49:51.830652 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:51.831033 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:49:52.331231 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:52.331606 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:49:52.831454 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:52.831851 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:49:53.330661 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:53.331095 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:49:53.830688 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:53.831120 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:49:54.330675 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:54.331109 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:49:54.830665 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:54.831088 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:49:55.330666 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:55.331118 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:49:55.830650 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:55.831062 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:49:56.330667 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:49:56.331029 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:49:56.830644 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:50:01.834694 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 05:50:01.834741 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:50:06.836906 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 05:50:06.836947 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:50:11.838704 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 05:50:11.838754 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:50:16.839028 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 05:50:16.839074 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:50:17.342463 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:33570->192.168.76.2:8443: read: connection reset by peer
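
The polling above shows the health-check cadence: minikube probes https://192.168.76.2:8443/healthz every ~500ms while the port actively refuses connections, switches to 5-second per-attempt client timeouts once nothing answers at all (05:50:01 through 05:50:16), and finally sees a connection reset once a process binds the port at 05:50:17. Below is a minimal bash sketch that reproduces the same probe by hand; the curl invocation is an illustrative stand-in, since minikube itself issues these requests from a Go HTTP client in api_server.go:

    # Probe the apiserver the way the log above does: retry every 0.5s,
    # give each attempt a 5s budget, and stop on the first healthy reply.
    until curl -ksf --max-time 5 https://192.168.76.2:8443/healthz; do
      sleep 0.5
    done
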
	I1209 05:50:17.342537 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:50:17.342623 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:50:17.385822 1795150 cri.go:89] found id: "30a1ab31bef6f11b5f5b7208559c0b8d26188979cc88b25c86811dcdff601b97"
	I1209 05:50:17.385846 1795150 cri.go:89] found id: "ace5d91b658894547a23309aa2fd6a63ccf63061cac9e1b16cd51ef51c1fb56d"
	I1209 05:50:17.385851 1795150 cri.go:89] found id: ""
	I1209 05:50:17.385859 1795150 logs.go:282] 2 containers: [30a1ab31bef6f11b5f5b7208559c0b8d26188979cc88b25c86811dcdff601b97 ace5d91b658894547a23309aa2fd6a63ccf63061cac9e1b16cd51ef51c1fb56d]
	I1209 05:50:17.385918 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:17.389657 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:17.393232 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:50:17.393313 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:50:17.436696 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:50:17.436720 1795150 cri.go:89] found id: ""
	I1209 05:50:17.436729 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:50:17.436791 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:17.441861 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:50:17.441932 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:50:17.483101 1795150 cri.go:89] found id: ""
	I1209 05:50:17.483128 1795150 logs.go:282] 0 containers: []
	W1209 05:50:17.483137 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:50:17.483144 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:50:17.483205 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:50:17.522320 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:50:17.522342 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:50:17.522347 1795150 cri.go:89] found id: ""
	I1209 05:50:17.522355 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:50:17.522410 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:17.525967 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:17.529338 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:50:17.529417 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:50:17.567083 1795150 cri.go:89] found id: ""
	I1209 05:50:17.567110 1795150 logs.go:282] 0 containers: []
	W1209 05:50:17.567119 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:50:17.567126 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:50:17.567188 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:50:17.606119 1795150 cri.go:89] found id: "0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5"
	I1209 05:50:17.606142 1795150 cri.go:89] found id: "71b850138e3a8b2adf3000d2b00480c1ef826a1ed0432fc96d334a9f588f225b"
	I1209 05:50:17.606147 1795150 cri.go:89] found id: ""
	I1209 05:50:17.606155 1795150 logs.go:282] 2 containers: [0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5 71b850138e3a8b2adf3000d2b00480c1ef826a1ed0432fc96d334a9f588f225b]
	I1209 05:50:17.606222 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:17.609941 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:17.613516 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:50:17.613604 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:50:17.653190 1795150 cri.go:89] found id: ""
	I1209 05:50:17.653215 1795150 logs.go:282] 0 containers: []
	W1209 05:50:17.653224 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:50:17.653232 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:50:17.653323 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:50:17.689342 1795150 cri.go:89] found id: ""
	I1209 05:50:17.689409 1795150 logs.go:282] 0 containers: []
	W1209 05:50:17.689424 1795150 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:50:17.689434 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:50:17.689446 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:50:17.761316 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
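
The fallback "describe nodes" probe fails for the same underlying reason: the kubeconfig baked into the node, /var/lib/minikube/kubeconfig, points at localhost:8443, and nothing is serving there while the apiserver is down. The same check can be reproduced from inside the node; the kubectl command below is copied verbatim from the log, while the minikube ssh wrapper and the <profile> placeholder are assumptions for illustration:

    # Re-run the failing probe by hand from inside the node.
    minikube -p <profile> ssh -- sudo /var/lib/minikube/binaries/v1.32.0/kubectl \
      describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
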
	I1209 05:50:17.761337 1795150 logs.go:123] Gathering logs for kube-apiserver [30a1ab31bef6f11b5f5b7208559c0b8d26188979cc88b25c86811dcdff601b97] ...
	I1209 05:50:17.761350 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30a1ab31bef6f11b5f5b7208559c0b8d26188979cc88b25c86811dcdff601b97"
	I1209 05:50:17.805307 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:50:17.805338 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:50:17.878904 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:50:17.878944 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:50:17.975587 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:50:17.975627 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:50:18.014792 1795150 logs.go:123] Gathering logs for kube-controller-manager [71b850138e3a8b2adf3000d2b00480c1ef826a1ed0432fc96d334a9f588f225b] ...
	I1209 05:50:18.014880 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71b850138e3a8b2adf3000d2b00480c1ef826a1ed0432fc96d334a9f588f225b"
	I1209 05:50:18.059844 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:50:18.059873 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:50:18.105776 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:50:18.105810 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:50:18.215191 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:50:18.215229 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:50:18.234127 1795150 logs.go:123] Gathering logs for kube-apiserver [ace5d91b658894547a23309aa2fd6a63ccf63061cac9e1b16cd51ef51c1fb56d] ...
	I1209 05:50:18.234164 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ace5d91b658894547a23309aa2fd6a63ccf63061cac9e1b16cd51ef51c1fb56d"
	W1209 05:50:18.272159 1795150 logs.go:130] failed kube-apiserver [ace5d91b658894547a23309aa2fd6a63ccf63061cac9e1b16cd51ef51c1fb56d]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ace5d91b658894547a23309aa2fd6a63ccf63061cac9e1b16cd51ef51c1fb56d" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ace5d91b658894547a23309aa2fd6a63ccf63061cac9e1b16cd51ef51c1fb56d": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:50:18.268540    2868 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ace5d91b658894547a23309aa2fd6a63ccf63061cac9e1b16cd51ef51c1fb56d\": container with ID starting with ace5d91b658894547a23309aa2fd6a63ccf63061cac9e1b16cd51ef51c1fb56d not found: ID does not exist" containerID="ace5d91b658894547a23309aa2fd6a63ccf63061cac9e1b16cd51ef51c1fb56d"
	time="2025-12-09T05:50:18Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"ace5d91b658894547a23309aa2fd6a63ccf63061cac9e1b16cd51ef51c1fb56d\": container with ID starting with ace5d91b658894547a23309aa2fd6a63ccf63061cac9e1b16cd51ef51c1fb56d not found: ID does not exist"
	 output: 
	** stderr ** 
	E1209 05:50:18.268540    2868 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ace5d91b658894547a23309aa2fd6a63ccf63061cac9e1b16cd51ef51c1fb56d\": container with ID starting with ace5d91b658894547a23309aa2fd6a63ccf63061cac9e1b16cd51ef51c1fb56d not found: ID does not exist" containerID="ace5d91b658894547a23309aa2fd6a63ccf63061cac9e1b16cd51ef51c1fb56d"
	time="2025-12-09T05:50:18Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"ace5d91b658894547a23309aa2fd6a63ccf63061cac9e1b16cd51ef51c1fb56d\": container with ID starting with ace5d91b658894547a23309aa2fd6a63ccf63061cac9e1b16cd51ef51c1fb56d not found: ID does not exist"
	
	** /stderr **
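
This NotFound failure is a race between the listing and the log fetch: crictl ps -a at 05:50:17 still reported the exited apiserver container ace5d91b…, but CRI-O had pruned it by the time crictl logs ran a second later. When collecting these logs by hand, the window can be tolerated with an existence check; the inspect-then-logs guard below is a suggested pattern, not minikube's own code:

    # IDs returned by `crictl ps -a` can vanish before the log fetch,
    # so fetch logs only for containers that still exist.
    for id in $(sudo crictl ps -a --quiet --name=kube-apiserver); do
      sudo crictl inspect "$id" >/dev/null 2>&1 && sudo crictl logs --tail 400 "$id"
    done
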
	I1209 05:50:18.272184 1795150 logs.go:123] Gathering logs for kube-controller-manager [0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5] ...
	I1209 05:50:18.272199 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5"
	I1209 05:50:18.309970 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:50:18.310001 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
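
Each gathering pass in this log runs the same sweep: a per-component crictl ps lookup, a tail of each container's logs, the kubelet and CRI-O journals, and dmesg. The sketch below collects one pass into a single script for manual triage on the node; the individual commands are copied from the log, and only the surrounding loop is added:

    # One pass of the diagnostic sweep shown above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
      for id in $(sudo crictl ps -a --quiet --name="$name"); do
        sudo crictl logs --tail 400 "$id"
      done
    done
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
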
	I1209 05:50:20.874745 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:50:20.875178 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:50:20.875228 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:50:20.875286 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:50:20.923361 1795150 cri.go:89] found id: "30a1ab31bef6f11b5f5b7208559c0b8d26188979cc88b25c86811dcdff601b97"
	I1209 05:50:20.923384 1795150 cri.go:89] found id: ""
	I1209 05:50:20.923393 1795150 logs.go:282] 1 containers: [30a1ab31bef6f11b5f5b7208559c0b8d26188979cc88b25c86811dcdff601b97]
	I1209 05:50:20.923458 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:20.927206 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:50:20.927285 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:50:20.968021 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:50:20.968048 1795150 cri.go:89] found id: ""
	I1209 05:50:20.968058 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:50:20.968133 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:20.971723 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:50:20.971795 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:50:21.014371 1795150 cri.go:89] found id: ""
	I1209 05:50:21.014401 1795150 logs.go:282] 0 containers: []
	W1209 05:50:21.014411 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:50:21.014418 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:50:21.014504 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:50:21.056325 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:50:21.056349 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:50:21.056354 1795150 cri.go:89] found id: ""
	I1209 05:50:21.056363 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:50:21.056424 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:21.060106 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:21.063961 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:50:21.064040 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:50:21.102989 1795150 cri.go:89] found id: ""
	I1209 05:50:21.103011 1795150 logs.go:282] 0 containers: []
	W1209 05:50:21.103022 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:50:21.103029 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:50:21.103089 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:50:21.142784 1795150 cri.go:89] found id: "0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5"
	I1209 05:50:21.142862 1795150 cri.go:89] found id: ""
	I1209 05:50:21.142888 1795150 logs.go:282] 1 containers: [0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5]
	I1209 05:50:21.142955 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:21.146718 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:50:21.146809 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:50:21.187178 1795150 cri.go:89] found id: ""
	I1209 05:50:21.187214 1795150 logs.go:282] 0 containers: []
	W1209 05:50:21.187223 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:50:21.187230 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:50:21.187294 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:50:21.225382 1795150 cri.go:89] found id: ""
	I1209 05:50:21.225406 1795150 logs.go:282] 0 containers: []
	W1209 05:50:21.225415 1795150 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:50:21.225431 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:50:21.225448 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:50:21.296910 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:50:21.296933 1795150 logs.go:123] Gathering logs for kube-apiserver [30a1ab31bef6f11b5f5b7208559c0b8d26188979cc88b25c86811dcdff601b97] ...
	I1209 05:50:21.296953 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30a1ab31bef6f11b5f5b7208559c0b8d26188979cc88b25c86811dcdff601b97"
	I1209 05:50:21.340724 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:50:21.340759 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:50:21.381677 1795150 logs.go:123] Gathering logs for kube-controller-manager [0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5] ...
	I1209 05:50:21.381746 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5"
	I1209 05:50:21.423622 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:50:21.423652 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:50:21.478051 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:50:21.478078 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:50:21.592181 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:50:21.592263 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:50:21.612493 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:50:21.612577 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:50:21.676442 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:50:21.676474 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:50:21.747923 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:50:21.747960 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:50:24.311473 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:50:24.311938 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:50:24.311988 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:50:24.312045 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:50:24.354262 1795150 cri.go:89] found id: "30a1ab31bef6f11b5f5b7208559c0b8d26188979cc88b25c86811dcdff601b97"
	I1209 05:50:24.354286 1795150 cri.go:89] found id: ""
	I1209 05:50:24.354295 1795150 logs.go:282] 1 containers: [30a1ab31bef6f11b5f5b7208559c0b8d26188979cc88b25c86811dcdff601b97]
	I1209 05:50:24.354353 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:24.357965 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:50:24.358038 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:50:24.398350 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:50:24.398373 1795150 cri.go:89] found id: ""
	I1209 05:50:24.398383 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:50:24.398441 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:24.402174 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:50:24.402265 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:50:24.440169 1795150 cri.go:89] found id: ""
	I1209 05:50:24.440196 1795150 logs.go:282] 0 containers: []
	W1209 05:50:24.440205 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:50:24.440213 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:50:24.440270 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:50:24.479468 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:50:24.479490 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:50:24.479495 1795150 cri.go:89] found id: ""
	I1209 05:50:24.479503 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:50:24.479565 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:24.483546 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:24.487879 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:50:24.487959 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:50:24.527837 1795150 cri.go:89] found id: ""
	I1209 05:50:24.527861 1795150 logs.go:282] 0 containers: []
	W1209 05:50:24.527871 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:50:24.527878 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:50:24.527949 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:50:24.568036 1795150 cri.go:89] found id: "0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5"
	I1209 05:50:24.568067 1795150 cri.go:89] found id: ""
	I1209 05:50:24.568076 1795150 logs.go:282] 1 containers: [0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5]
	I1209 05:50:24.568142 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:24.573128 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:50:24.573205 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:50:24.623865 1795150 cri.go:89] found id: ""
	I1209 05:50:24.623940 1795150 logs.go:282] 0 containers: []
	W1209 05:50:24.623962 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:50:24.623982 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:50:24.624070 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:50:24.672349 1795150 cri.go:89] found id: ""
	I1209 05:50:24.672375 1795150 logs.go:282] 0 containers: []
	W1209 05:50:24.672384 1795150 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:50:24.672399 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:50:24.672433 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:50:24.747071 1795150 logs.go:123] Gathering logs for kube-controller-manager [0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5] ...
	I1209 05:50:24.747109 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5"
	I1209 05:50:24.788047 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:50:24.788082 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:50:24.851045 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:50:24.851089 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:50:24.925162 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:50:24.925182 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:50:24.925195 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:50:24.970763 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:50:24.970815 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:50:25.020210 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:50:25.020253 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:50:25.075338 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:50:25.075368 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:50:25.195603 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:50:25.195642 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:50:25.215725 1795150 logs.go:123] Gathering logs for kube-apiserver [30a1ab31bef6f11b5f5b7208559c0b8d26188979cc88b25c86811dcdff601b97] ...
	I1209 05:50:25.215752 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30a1ab31bef6f11b5f5b7208559c0b8d26188979cc88b25c86811dcdff601b97"
	I1209 05:50:27.761760 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:50:27.762257 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:50:27.762342 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:50:27.762403 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:50:27.799662 1795150 cri.go:89] found id: "30a1ab31bef6f11b5f5b7208559c0b8d26188979cc88b25c86811dcdff601b97"
	I1209 05:50:27.799682 1795150 cri.go:89] found id: ""
	I1209 05:50:27.799690 1795150 logs.go:282] 1 containers: [30a1ab31bef6f11b5f5b7208559c0b8d26188979cc88b25c86811dcdff601b97]
	I1209 05:50:27.799766 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:27.803228 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:50:27.803337 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:50:27.846783 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:50:27.846803 1795150 cri.go:89] found id: ""
	I1209 05:50:27.846811 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:50:27.846867 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:27.850444 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:50:27.850528 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:50:27.889870 1795150 cri.go:89] found id: ""
	I1209 05:50:27.889897 1795150 logs.go:282] 0 containers: []
	W1209 05:50:27.889906 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:50:27.889913 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:50:27.889973 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:50:27.932813 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:50:27.932835 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:50:27.932841 1795150 cri.go:89] found id: ""
	I1209 05:50:27.932848 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:50:27.932904 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:27.936537 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:27.939901 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:50:27.939998 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:50:27.979023 1795150 cri.go:89] found id: ""
	I1209 05:50:27.979094 1795150 logs.go:282] 0 containers: []
	W1209 05:50:27.979109 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:50:27.979120 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:50:27.979188 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:50:28.024269 1795150 cri.go:89] found id: "0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5"
	I1209 05:50:28.024299 1795150 cri.go:89] found id: ""
	I1209 05:50:28.024309 1795150 logs.go:282] 1 containers: [0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5]
	I1209 05:50:28.024389 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:28.028908 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:50:28.029040 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:50:28.072693 1795150 cri.go:89] found id: ""
	I1209 05:50:28.072728 1795150 logs.go:282] 0 containers: []
	W1209 05:50:28.072738 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:50:28.072745 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:50:28.072816 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:50:28.118087 1795150 cri.go:89] found id: ""
	I1209 05:50:28.118111 1795150 logs.go:282] 0 containers: []
	W1209 05:50:28.118120 1795150 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:50:28.118136 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:50:28.118148 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:50:28.191965 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:50:28.191985 1795150 logs.go:123] Gathering logs for kube-apiserver [30a1ab31bef6f11b5f5b7208559c0b8d26188979cc88b25c86811dcdff601b97] ...
	I1209 05:50:28.191999 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30a1ab31bef6f11b5f5b7208559c0b8d26188979cc88b25c86811dcdff601b97"
	I1209 05:50:28.236232 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:50:28.236265 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:50:28.287827 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:50:28.287858 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:50:28.330935 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:50:28.330965 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:50:28.398911 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:50:28.398946 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:50:28.441680 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:50:28.441709 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:50:28.561300 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:50:28.561336 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:50:28.580388 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:50:28.580418 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:50:28.655869 1795150 logs.go:123] Gathering logs for kube-controller-manager [0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5] ...
	I1209 05:50:28.655906 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5"
	I1209 05:50:31.195544 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:50:31.196022 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:50:31.196100 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:50:31.196164 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:50:31.236505 1795150 cri.go:89] found id: "30a1ab31bef6f11b5f5b7208559c0b8d26188979cc88b25c86811dcdff601b97"
	I1209 05:50:31.236527 1795150 cri.go:89] found id: ""
	I1209 05:50:31.236536 1795150 logs.go:282] 1 containers: [30a1ab31bef6f11b5f5b7208559c0b8d26188979cc88b25c86811dcdff601b97]
	I1209 05:50:31.236593 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:31.240161 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:50:31.240235 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:50:31.277280 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:50:31.277314 1795150 cri.go:89] found id: ""
	I1209 05:50:31.277324 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:50:31.277391 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:31.281158 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:50:31.281261 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:50:31.320119 1795150 cri.go:89] found id: ""
	I1209 05:50:31.320146 1795150 logs.go:282] 0 containers: []
	W1209 05:50:31.320154 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:50:31.320161 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:50:31.320266 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:50:31.380356 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:50:31.380383 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:50:31.380389 1795150 cri.go:89] found id: ""
	I1209 05:50:31.380397 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:50:31.380467 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:31.384869 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:31.388716 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:50:31.388801 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:50:31.427565 1795150 cri.go:89] found id: ""
	I1209 05:50:31.427590 1795150 logs.go:282] 0 containers: []
	W1209 05:50:31.427599 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:50:31.427606 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:50:31.427703 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:50:31.465609 1795150 cri.go:89] found id: "0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5"
	I1209 05:50:31.465673 1795150 cri.go:89] found id: ""
	I1209 05:50:31.465688 1795150 logs.go:282] 1 containers: [0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5]
	I1209 05:50:31.465754 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:31.469377 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:50:31.469450 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:50:31.513356 1795150 cri.go:89] found id: ""
	I1209 05:50:31.513379 1795150 logs.go:282] 0 containers: []
	W1209 05:50:31.513387 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:50:31.513394 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:50:31.513454 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:50:31.553177 1795150 cri.go:89] found id: ""
	I1209 05:50:31.553203 1795150 logs.go:282] 0 containers: []
	W1209 05:50:31.553213 1795150 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:50:31.553269 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:50:31.553288 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:50:31.674502 1795150 logs.go:123] Gathering logs for kube-apiserver [30a1ab31bef6f11b5f5b7208559c0b8d26188979cc88b25c86811dcdff601b97] ...
	I1209 05:50:31.674539 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30a1ab31bef6f11b5f5b7208559c0b8d26188979cc88b25c86811dcdff601b97"
	I1209 05:50:31.720423 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:50:31.720453 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:50:31.768344 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:50:31.768377 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:50:31.810364 1795150 logs.go:123] Gathering logs for kube-controller-manager [0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5] ...
	I1209 05:50:31.810394 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5"
	I1209 05:50:31.859264 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:50:31.859297 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:50:31.918529 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:50:31.918563 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:50:31.961435 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:50:31.961518 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:50:31.979557 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:50:31.979587 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:50:32.049850 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:50:32.049869 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:50:32.049884 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:50:34.671442 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:50:34.672021 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:50:34.672072 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:50:34.672129 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:50:34.711254 1795150 cri.go:89] found id: "30a1ab31bef6f11b5f5b7208559c0b8d26188979cc88b25c86811dcdff601b97"
	I1209 05:50:34.711274 1795150 cri.go:89] found id: ""
	I1209 05:50:34.711283 1795150 logs.go:282] 1 containers: [30a1ab31bef6f11b5f5b7208559c0b8d26188979cc88b25c86811dcdff601b97]
	I1209 05:50:34.711338 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:34.714877 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:50:34.714950 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:50:34.751444 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:50:34.751467 1795150 cri.go:89] found id: ""
	I1209 05:50:34.751476 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:50:34.751530 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:34.755021 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:50:34.755097 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:50:34.792676 1795150 cri.go:89] found id: ""
	I1209 05:50:34.792701 1795150 logs.go:282] 0 containers: []
	W1209 05:50:34.792710 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:50:34.792717 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:50:34.792777 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:50:34.837357 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:50:34.837382 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:50:34.837388 1795150 cri.go:89] found id: ""
	I1209 05:50:34.837396 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:50:34.837452 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:34.841149 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:34.844421 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:50:34.844516 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:50:34.882215 1795150 cri.go:89] found id: ""
	I1209 05:50:34.882281 1795150 logs.go:282] 0 containers: []
	W1209 05:50:34.882307 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:50:34.882329 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:50:34.882410 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:50:34.923606 1795150 cri.go:89] found id: "0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5"
	I1209 05:50:34.923669 1795150 cri.go:89] found id: ""
	I1209 05:50:34.923693 1795150 logs.go:282] 1 containers: [0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5]
	I1209 05:50:34.923764 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:34.927312 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:50:34.927403 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:50:34.963575 1795150 cri.go:89] found id: ""
	I1209 05:50:34.963638 1795150 logs.go:282] 0 containers: []
	W1209 05:50:34.963653 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:50:34.963659 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:50:34.963736 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:50:35.012601 1795150 cri.go:89] found id: ""
	I1209 05:50:35.012636 1795150 logs.go:282] 0 containers: []
	W1209 05:50:35.012647 1795150 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:50:35.012692 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:50:35.012717 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:50:35.096122 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:50:35.096166 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:50:35.163470 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:50:35.163511 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:50:35.208138 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:50:35.208170 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:50:35.227756 1795150 logs.go:123] Gathering logs for kube-apiserver [30a1ab31bef6f11b5f5b7208559c0b8d26188979cc88b25c86811dcdff601b97] ...
	I1209 05:50:35.227786 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30a1ab31bef6f11b5f5b7208559c0b8d26188979cc88b25c86811dcdff601b97"
	I1209 05:50:35.269744 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:50:35.269775 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:50:35.322591 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:50:35.322621 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:50:35.361111 1795150 logs.go:123] Gathering logs for kube-controller-manager [0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5] ...
	I1209 05:50:35.361140 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5"
	I1209 05:50:35.399077 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:50:35.399105 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:50:35.513560 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:50:35.513634 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:50:35.592633 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:50:38.093653 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:50:38.094183 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:50:38.094268 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:50:38.094387 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:50:38.144033 1795150 cri.go:89] found id: "30a1ab31bef6f11b5f5b7208559c0b8d26188979cc88b25c86811dcdff601b97"
	I1209 05:50:38.144069 1795150 cri.go:89] found id: ""
	I1209 05:50:38.144079 1795150 logs.go:282] 1 containers: [30a1ab31bef6f11b5f5b7208559c0b8d26188979cc88b25c86811dcdff601b97]
	I1209 05:50:38.144176 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:38.147949 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:50:38.148023 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:50:38.188042 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:50:38.188065 1795150 cri.go:89] found id: ""
	I1209 05:50:38.188075 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:50:38.188129 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:38.191757 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:50:38.191833 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:50:38.229828 1795150 cri.go:89] found id: ""
	I1209 05:50:38.229898 1795150 logs.go:282] 0 containers: []
	W1209 05:50:38.229923 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:50:38.229942 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:50:38.230032 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:50:38.267044 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:50:38.267065 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:50:38.267071 1795150 cri.go:89] found id: ""
	I1209 05:50:38.267079 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:50:38.267178 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:38.270961 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:38.274312 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:50:38.274386 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:50:38.312907 1795150 cri.go:89] found id: ""
	I1209 05:50:38.312934 1795150 logs.go:282] 0 containers: []
	W1209 05:50:38.312943 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:50:38.312950 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:50:38.313013 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:50:38.351957 1795150 cri.go:89] found id: "0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5"
	I1209 05:50:38.351977 1795150 cri.go:89] found id: ""
	I1209 05:50:38.351985 1795150 logs.go:282] 1 containers: [0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5]
	I1209 05:50:38.352044 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:38.355832 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:50:38.355958 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:50:38.396180 1795150 cri.go:89] found id: ""
	I1209 05:50:38.396205 1795150 logs.go:282] 0 containers: []
	W1209 05:50:38.396214 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:50:38.396221 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:50:38.396286 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:50:38.435662 1795150 cri.go:89] found id: ""
	I1209 05:50:38.435690 1795150 logs.go:282] 0 containers: []
	W1209 05:50:38.435700 1795150 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:50:38.435735 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:50:38.435754 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:50:38.511021 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:50:38.511041 1795150 logs.go:123] Gathering logs for kube-controller-manager [0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5] ...
	I1209 05:50:38.511054 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5"
	I1209 05:50:38.552490 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:50:38.552561 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:50:38.570256 1795150 logs.go:123] Gathering logs for kube-apiserver [30a1ab31bef6f11b5f5b7208559c0b8d26188979cc88b25c86811dcdff601b97] ...
	I1209 05:50:38.570344 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30a1ab31bef6f11b5f5b7208559c0b8d26188979cc88b25c86811dcdff601b97"
	I1209 05:50:38.611599 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:50:38.611632 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:50:38.656124 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:50:38.656161 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:50:38.735908 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:50:38.735951 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:50:38.774785 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:50:38.774816 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:50:38.843713 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:50:38.843796 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:50:38.892797 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:50:38.892822 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
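	The block above is one complete diagnostic pass: for each control-plane component, the tool runs `sudo crictl ps -a --quiet --name=<component>` and records whatever container IDs come back (the `cri.go:89` lines), then tails the logs of each ID it found. A rough sketch of just the discovery step, shelling out the same way the log shows; the command and the component list are verbatim from this report, while the helper name is invented for illustration.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// findContainers mirrors the `sudo crictl ps -a --quiet --name=<name>`
// calls in the log: it returns all container IDs, running or exited,
// whose name matches the given component.
func findContainers(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a",
		"--quiet", "--name="+component).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps for %q: %w", component, err)
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	// The same component list each pass in the log walks through.
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := findContainers(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d container(s) %v\n", c, len(ids), ids)
	}
}
```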
	I1209 05:50:41.507767 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:50:46.509014 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 05:50:46.509104 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:50:46.509189 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:50:46.546909 1795150 cri.go:89] found id: "1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff"
	I1209 05:50:46.546931 1795150 cri.go:89] found id: "30a1ab31bef6f11b5f5b7208559c0b8d26188979cc88b25c86811dcdff601b97"
	I1209 05:50:46.546935 1795150 cri.go:89] found id: ""
	I1209 05:50:46.546942 1795150 logs.go:282] 2 containers: [1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff 30a1ab31bef6f11b5f5b7208559c0b8d26188979cc88b25c86811dcdff601b97]
	I1209 05:50:46.547004 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:46.550767 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:46.554276 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:50:46.554351 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:50:46.594985 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:50:46.595061 1795150 cri.go:89] found id: ""
	I1209 05:50:46.595085 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:50:46.595177 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:46.599006 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:50:46.599084 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:50:46.636055 1795150 cri.go:89] found id: ""
	I1209 05:50:46.636079 1795150 logs.go:282] 0 containers: []
	W1209 05:50:46.636090 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:50:46.636096 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:50:46.636203 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:50:46.673363 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:50:46.673386 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:50:46.673392 1795150 cri.go:89] found id: ""
	I1209 05:50:46.673400 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:50:46.673509 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:46.677400 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:46.681169 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:50:46.681251 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:50:46.727598 1795150 cri.go:89] found id: ""
	I1209 05:50:46.727621 1795150 logs.go:282] 0 containers: []
	W1209 05:50:46.727630 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:50:46.727637 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:50:46.727698 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:50:46.766674 1795150 cri.go:89] found id: "0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5"
	I1209 05:50:46.766698 1795150 cri.go:89] found id: ""
	I1209 05:50:46.766707 1795150 logs.go:282] 1 containers: [0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5]
	I1209 05:50:46.766764 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:50:46.770141 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:50:46.770235 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:50:46.807958 1795150 cri.go:89] found id: ""
	I1209 05:50:46.807984 1795150 logs.go:282] 0 containers: []
	W1209 05:50:46.807994 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:50:46.808000 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:50:46.808062 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:50:46.852196 1795150 cri.go:89] found id: ""
	I1209 05:50:46.852224 1795150 logs.go:282] 0 containers: []
	W1209 05:50:46.852232 1795150 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:50:46.852242 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:50:46.852254 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:50:46.966423 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:50:46.966459 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:50:46.986765 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:50:46.986938 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 05:50:57.065902 1795150 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.078933586s)
	W1209 05:50:57.065943 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1209 05:50:57.065951 1795150 logs.go:123] Gathering logs for kube-controller-manager [0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5] ...
	I1209 05:50:57.065962 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5"
	I1209 05:50:57.105707 1795150 logs.go:123] Gathering logs for kube-apiserver [1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff] ...
	I1209 05:50:57.105740 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff"
	I1209 05:50:57.148468 1795150 logs.go:123] Gathering logs for kube-apiserver [30a1ab31bef6f11b5f5b7208559c0b8d26188979cc88b25c86811dcdff601b97] ...
	I1209 05:50:57.148498 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30a1ab31bef6f11b5f5b7208559c0b8d26188979cc88b25c86811dcdff601b97"
	I1209 05:50:57.192477 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:50:57.192509 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:50:57.244731 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:50:57.244764 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:50:57.323918 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:50:57.323957 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:50:57.365495 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:50:57.365526 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:50:57.435814 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:50:57.435849 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
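	Note how the failure mode shifts across these passes: the earlier `describe nodes` attempts failed instantly with `connection refused`, the one above hung for about 10 seconds (the `ssh_runner.go:235` completion line) before a TLS handshake timeout, and the probe just below dies with `connection reset by peer`. That progression is consistent with an apiserver container that is cycling through restarts rather than simply absent. The three cases surface as different Go error types; a hedged sketch of telling them apart on Linux, with the caveat that error wrapping details vary across Go versions:

```go
package main

import (
	"errors"
	"fmt"
	"net"
	"net/http"
	"syscall"
	"time"
)

// classify maps the three failure modes visible in this log onto labels.
// Illustrative only; assumes Linux errno values via the syscall package.
func classify(err error) string {
	var nerr net.Error
	switch {
	case errors.Is(err, syscall.ECONNREFUSED):
		return "connection refused: nothing listening, apiserver down"
	case errors.Is(err, syscall.ECONNRESET):
		return "connection reset: listener died mid-request, likely restarting"
	case errors.As(err, &nerr) && nerr.Timeout():
		return "timeout: socket reachable but apiserver not answering"
	default:
		return "other"
	}
}

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	_, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		fmt.Println(classify(err))
	}
}
```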
	I1209 05:50:59.996674 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:51:00.820772 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:45452->192.168.76.2:8443: read: connection reset by peer
	I1209 05:51:00.820830 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:51:00.820897 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:51:00.870407 1795150 cri.go:89] found id: "1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff"
	I1209 05:51:00.870429 1795150 cri.go:89] found id: "30a1ab31bef6f11b5f5b7208559c0b8d26188979cc88b25c86811dcdff601b97"
	I1209 05:51:00.870434 1795150 cri.go:89] found id: ""
	I1209 05:51:00.870442 1795150 logs.go:282] 2 containers: [1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff 30a1ab31bef6f11b5f5b7208559c0b8d26188979cc88b25c86811dcdff601b97]
	I1209 05:51:00.870501 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:00.874454 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:00.878140 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:51:00.878219 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:51:00.916790 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:51:00.916858 1795150 cri.go:89] found id: ""
	I1209 05:51:00.916873 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:51:00.916944 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:00.920555 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:51:00.920635 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:51:00.962858 1795150 cri.go:89] found id: ""
	I1209 05:51:00.962881 1795150 logs.go:282] 0 containers: []
	W1209 05:51:00.962890 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:51:00.962896 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:51:00.962956 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:51:01.001413 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:51:01.001436 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:51:01.001441 1795150 cri.go:89] found id: ""
	I1209 05:51:01.001449 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:51:01.001508 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:01.005526 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:01.010411 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:51:01.010659 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:51:01.051427 1795150 cri.go:89] found id: ""
	I1209 05:51:01.051508 1795150 logs.go:282] 0 containers: []
	W1209 05:51:01.051549 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:51:01.051570 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:51:01.051663 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:51:01.091938 1795150 cri.go:89] found id: "b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52"
	I1209 05:51:01.091962 1795150 cri.go:89] found id: "0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5"
	I1209 05:51:01.091968 1795150 cri.go:89] found id: ""
	I1209 05:51:01.091976 1795150 logs.go:282] 2 containers: [b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52 0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5]
	I1209 05:51:01.092059 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:01.096015 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:01.099786 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:51:01.099881 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:51:01.140080 1795150 cri.go:89] found id: ""
	I1209 05:51:01.140167 1795150 logs.go:282] 0 containers: []
	W1209 05:51:01.140192 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:51:01.140209 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:51:01.140296 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:51:01.180913 1795150 cri.go:89] found id: ""
	I1209 05:51:01.180946 1795150 logs.go:282] 0 containers: []
	W1209 05:51:01.180955 1795150 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:51:01.180967 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:51:01.180985 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:51:01.261221 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:51:01.261264 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:51:01.331943 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:51:01.331990 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:51:01.404679 1795150 logs.go:123] Gathering logs for kube-apiserver [1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff] ...
	I1209 05:51:01.404709 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff"
	I1209 05:51:01.445982 1795150 logs.go:123] Gathering logs for kube-apiserver [30a1ab31bef6f11b5f5b7208559c0b8d26188979cc88b25c86811dcdff601b97] ...
	I1209 05:51:01.446011 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30a1ab31bef6f11b5f5b7208559c0b8d26188979cc88b25c86811dcdff601b97"
	I1209 05:51:01.490901 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:51:01.490933 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:51:01.548037 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:51:01.548072 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:51:01.589763 1795150 logs.go:123] Gathering logs for kube-controller-manager [b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52] ...
	I1209 05:51:01.589795 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52"
	I1209 05:51:01.631078 1795150 logs.go:123] Gathering logs for kube-controller-manager [0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5] ...
	I1209 05:51:01.631109 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5"
	I1209 05:51:01.670365 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:51:01.670393 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:51:01.784896 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:51:01.784932 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:51:01.804286 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:51:01.804370 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:51:01.882355 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
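	Each pass finishes the same way regardless of what the health probe said: `crictl logs --tail 400 <id>` for every container found, plus `journalctl -u crio -n 400`, `journalctl -u kubelet -n 400`, and a severity-filtered `dmesg`, all capped at 400 lines so a wedged node cannot flood the report. A small sketch of that collection step; the commands are verbatim from the log, the wrapper name and the sample container ID binding are illustrative only.

```go
package main

import (
	"fmt"
	"os/exec"
)

// gather runs one capped log-collection command through bash, the same
// way the ssh_runner.go:195 lines invoke them on the node.
func gather(label, command string) {
	out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
	fmt.Printf("=== %s (err=%v) ===\n%s\n", label, err, out)
}

func main() {
	// One of the kube-controller-manager IDs seen in this report.
	id := "0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5"
	gather("kube-controller-manager", "sudo /usr/bin/crictl logs --tail 400 "+id)
	gather("CRI-O", "sudo journalctl -u crio -n 400")
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("dmesg",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
}
```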
	I1209 05:51:04.382660 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:51:04.383155 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:51:04.383203 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:51:04.383263 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:51:04.421761 1795150 cri.go:89] found id: "1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff"
	I1209 05:51:04.421784 1795150 cri.go:89] found id: ""
	I1209 05:51:04.421792 1795150 logs.go:282] 1 containers: [1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff]
	I1209 05:51:04.421858 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:04.425699 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:51:04.425776 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:51:04.464936 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:51:04.465010 1795150 cri.go:89] found id: ""
	I1209 05:51:04.465024 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:51:04.465097 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:04.468925 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:51:04.469016 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:51:04.512061 1795150 cri.go:89] found id: ""
	I1209 05:51:04.512087 1795150 logs.go:282] 0 containers: []
	W1209 05:51:04.512096 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:51:04.512103 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:51:04.512175 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:51:04.551721 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:51:04.551744 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:51:04.551750 1795150 cri.go:89] found id: ""
	I1209 05:51:04.551758 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:51:04.551817 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:04.555510 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:04.558924 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:51:04.558997 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:51:04.600441 1795150 cri.go:89] found id: ""
	I1209 05:51:04.600467 1795150 logs.go:282] 0 containers: []
	W1209 05:51:04.600477 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:51:04.600489 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:51:04.600559 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:51:04.638691 1795150 cri.go:89] found id: "b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52"
	I1209 05:51:04.638715 1795150 cri.go:89] found id: "0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5"
	I1209 05:51:04.638721 1795150 cri.go:89] found id: ""
	I1209 05:51:04.638730 1795150 logs.go:282] 2 containers: [b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52 0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5]
	I1209 05:51:04.638793 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:04.642641 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:04.646325 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:51:04.646423 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:51:04.686884 1795150 cri.go:89] found id: ""
	I1209 05:51:04.686918 1795150 logs.go:282] 0 containers: []
	W1209 05:51:04.686928 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:51:04.686935 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:51:04.687010 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:51:04.726768 1795150 cri.go:89] found id: ""
	I1209 05:51:04.726796 1795150 logs.go:282] 0 containers: []
	W1209 05:51:04.726804 1795150 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:51:04.726814 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:51:04.726827 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:51:04.744955 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:51:04.744988 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:51:04.817617 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:51:04.817640 1795150 logs.go:123] Gathering logs for kube-apiserver [1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff] ...
	I1209 05:51:04.817655 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff"
	I1209 05:51:04.859915 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:51:04.859945 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:51:04.906030 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:51:04.906063 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:51:05.010965 1795150 logs.go:123] Gathering logs for kube-controller-manager [0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5] ...
	I1209 05:51:05.011016 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5"
	I1209 05:51:05.056504 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:51:05.056537 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:51:05.138921 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:51:05.138955 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:51:05.268327 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:51:05.268373 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:51:05.308184 1795150 logs.go:123] Gathering logs for kube-controller-manager [b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52] ...
	I1209 05:51:05.308217 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52"
	I1209 05:51:05.347807 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:51:05.347838 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:51:07.913916 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:51:07.914370 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:51:07.914424 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:51:07.914494 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:51:07.956574 1795150 cri.go:89] found id: "1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff"
	I1209 05:51:07.956597 1795150 cri.go:89] found id: ""
	I1209 05:51:07.956606 1795150 logs.go:282] 1 containers: [1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff]
	I1209 05:51:07.956685 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:07.960254 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:51:07.960348 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:51:07.998232 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:51:07.998255 1795150 cri.go:89] found id: ""
	I1209 05:51:07.998264 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:51:07.998351 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:08.001958 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:51:08.002058 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:51:08.042999 1795150 cri.go:89] found id: ""
	I1209 05:51:08.043026 1795150 logs.go:282] 0 containers: []
	W1209 05:51:08.043036 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:51:08.043043 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:51:08.043151 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:51:08.098382 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:51:08.098459 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:51:08.098492 1795150 cri.go:89] found id: ""
	I1209 05:51:08.098521 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:51:08.098631 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:08.103226 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:08.107689 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:51:08.107821 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:51:08.156347 1795150 cri.go:89] found id: ""
	I1209 05:51:08.156370 1795150 logs.go:282] 0 containers: []
	W1209 05:51:08.156379 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:51:08.156386 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:51:08.156447 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:51:08.199883 1795150 cri.go:89] found id: "b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52"
	I1209 05:51:08.199921 1795150 cri.go:89] found id: "0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5"
	I1209 05:51:08.199926 1795150 cri.go:89] found id: ""
	I1209 05:51:08.199934 1795150 logs.go:282] 2 containers: [b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52 0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5]
	I1209 05:51:08.200004 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:08.203795 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:08.207357 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:51:08.207452 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:51:08.246366 1795150 cri.go:89] found id: ""
	I1209 05:51:08.246393 1795150 logs.go:282] 0 containers: []
	W1209 05:51:08.246402 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:51:08.246409 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:51:08.246470 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:51:08.290904 1795150 cri.go:89] found id: ""
	I1209 05:51:08.290927 1795150 logs.go:282] 0 containers: []
	W1209 05:51:08.290935 1795150 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:51:08.290944 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:51:08.290959 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:51:08.405596 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:51:08.405633 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:51:08.475627 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:51:08.475650 1795150 logs.go:123] Gathering logs for kube-apiserver [1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff] ...
	I1209 05:51:08.475665 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff"
	I1209 05:51:08.522627 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:51:08.522657 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:51:08.570186 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:51:08.570220 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:51:08.610251 1795150 logs.go:123] Gathering logs for kube-controller-manager [0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5] ...
	I1209 05:51:08.610281 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5"
	I1209 05:51:08.649932 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:51:08.649962 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:51:08.716743 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:51:08.716781 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:51:08.759083 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:51:08.759112 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:51:08.776923 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:51:08.776959 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:51:08.867285 1795150 logs.go:123] Gathering logs for kube-controller-manager [b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52] ...
	I1209 05:51:08.867325 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52"
	I1209 05:51:11.409366 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:51:11.409834 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:51:11.409880 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:51:11.409940 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:51:11.451108 1795150 cri.go:89] found id: "1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff"
	I1209 05:51:11.451126 1795150 cri.go:89] found id: ""
	I1209 05:51:11.451134 1795150 logs.go:282] 1 containers: [1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff]
	I1209 05:51:11.451197 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:11.454730 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:51:11.454871 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:51:11.493079 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:51:11.493101 1795150 cri.go:89] found id: ""
	I1209 05:51:11.493109 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:51:11.493168 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:11.496780 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:51:11.496874 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:51:11.537897 1795150 cri.go:89] found id: ""
	I1209 05:51:11.537920 1795150 logs.go:282] 0 containers: []
	W1209 05:51:11.537928 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:51:11.537934 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:51:11.538004 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:51:11.576799 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:51:11.576818 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:51:11.576824 1795150 cri.go:89] found id: ""
	I1209 05:51:11.576832 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:51:11.576892 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:11.580546 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:11.583909 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:51:11.583984 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:51:11.623341 1795150 cri.go:89] found id: ""
	I1209 05:51:11.623364 1795150 logs.go:282] 0 containers: []
	W1209 05:51:11.623372 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:51:11.623380 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:51:11.623446 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:51:11.661582 1795150 cri.go:89] found id: "b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52"
	I1209 05:51:11.661606 1795150 cri.go:89] found id: "0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5"
	I1209 05:51:11.661611 1795150 cri.go:89] found id: ""
	I1209 05:51:11.661622 1795150 logs.go:282] 2 containers: [b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52 0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5]
	I1209 05:51:11.661680 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:11.665235 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:11.668803 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:51:11.668882 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:51:11.706314 1795150 cri.go:89] found id: ""
	I1209 05:51:11.706335 1795150 logs.go:282] 0 containers: []
	W1209 05:51:11.706344 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:51:11.706350 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:51:11.706419 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:51:11.744454 1795150 cri.go:89] found id: ""
	I1209 05:51:11.744481 1795150 logs.go:282] 0 containers: []
	W1209 05:51:11.744490 1795150 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:51:11.744500 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:51:11.744512 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:51:11.810564 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:51:11.810615 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:51:11.899121 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:51:11.899201 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:51:11.923270 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:51:11.923301 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:51:11.997460 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:51:11.997480 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:51:11.997497 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:51:12.082507 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:51:12.082546 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:51:12.121781 1795150 logs.go:123] Gathering logs for kube-controller-manager [b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52] ...
	I1209 05:51:12.121813 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52"
	I1209 05:51:12.159850 1795150 logs.go:123] Gathering logs for kube-controller-manager [0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5] ...
	I1209 05:51:12.159878 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e9b69d74f1f37895c2246254951ba0a24b2dabea25502a18616876b7fed7da5"
	I1209 05:51:12.201150 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:51:12.201181 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:51:12.316160 1795150 logs.go:123] Gathering logs for kube-apiserver [1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff] ...
	I1209 05:51:12.316197 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff"
	I1209 05:51:12.359002 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:51:12.359033 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:51:14.905560 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:51:14.906033 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:51:14.906083 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:51:14.906150 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:51:14.947575 1795150 cri.go:89] found id: "1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff"
	I1209 05:51:14.947596 1795150 cri.go:89] found id: ""
	I1209 05:51:14.947605 1795150 logs.go:282] 1 containers: [1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff]
	I1209 05:51:14.947663 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:14.951407 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:51:14.951474 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:51:14.988969 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:51:14.988992 1795150 cri.go:89] found id: ""
	I1209 05:51:14.989003 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:51:14.989060 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:14.992723 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:51:14.992809 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:51:15.041291 1795150 cri.go:89] found id: ""
	I1209 05:51:15.041316 1795150 logs.go:282] 0 containers: []
	W1209 05:51:15.041336 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:51:15.041344 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:51:15.041410 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:51:15.080759 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:51:15.080782 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:51:15.080787 1795150 cri.go:89] found id: ""
	I1209 05:51:15.080795 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:51:15.080857 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:15.084833 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:15.088714 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:51:15.088790 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:51:15.127557 1795150 cri.go:89] found id: ""
	I1209 05:51:15.127625 1795150 logs.go:282] 0 containers: []
	W1209 05:51:15.127648 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:51:15.127664 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:51:15.127747 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:51:15.165421 1795150 cri.go:89] found id: "b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52"
	I1209 05:51:15.165446 1795150 cri.go:89] found id: ""
	I1209 05:51:15.165455 1795150 logs.go:282] 1 containers: [b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52]
	I1209 05:51:15.165516 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:15.169310 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:51:15.169387 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:51:15.207745 1795150 cri.go:89] found id: ""
	I1209 05:51:15.207772 1795150 logs.go:282] 0 containers: []
	W1209 05:51:15.207781 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:51:15.207788 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:51:15.207853 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:51:15.245063 1795150 cri.go:89] found id: ""
	I1209 05:51:15.245089 1795150 logs.go:282] 0 containers: []
	W1209 05:51:15.245098 1795150 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:51:15.245122 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:51:15.245135 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:51:15.263288 1795150 logs.go:123] Gathering logs for kube-apiserver [1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff] ...
	I1209 05:51:15.263317 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff"
	I1209 05:51:15.317926 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:51:15.317958 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:51:15.400593 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:51:15.400630 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:51:15.438119 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:51:15.438149 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:51:15.502090 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:51:15.502127 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:51:15.545492 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:51:15.545522 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:51:15.664669 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:51:15.664707 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:51:15.730796 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:51:15.730817 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:51:15.730830 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:51:15.775962 1795150 logs.go:123] Gathering logs for kube-controller-manager [b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52] ...
	I1209 05:51:15.775993 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52"
	I1209 05:51:18.317553 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:51:18.318017 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:51:18.318067 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:51:18.318137 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:51:18.356544 1795150 cri.go:89] found id: "1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff"
	I1209 05:51:18.356568 1795150 cri.go:89] found id: ""
	I1209 05:51:18.356577 1795150 logs.go:282] 1 containers: [1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff]
	I1209 05:51:18.356640 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:18.360579 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:51:18.360652 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:51:18.398133 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:51:18.398197 1795150 cri.go:89] found id: ""
	I1209 05:51:18.398220 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:51:18.398304 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:18.401791 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:51:18.401897 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:51:18.438450 1795150 cri.go:89] found id: ""
	I1209 05:51:18.438519 1795150 logs.go:282] 0 containers: []
	W1209 05:51:18.438542 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:51:18.438560 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:51:18.438658 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:51:18.476892 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:51:18.476915 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:51:18.476920 1795150 cri.go:89] found id: ""
	I1209 05:51:18.476939 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:51:18.477018 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:18.480888 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:18.484514 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:51:18.484584 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:51:18.522090 1795150 cri.go:89] found id: ""
	I1209 05:51:18.522115 1795150 logs.go:282] 0 containers: []
	W1209 05:51:18.522124 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:51:18.522131 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:51:18.522194 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:51:18.559847 1795150 cri.go:89] found id: "b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52"
	I1209 05:51:18.559869 1795150 cri.go:89] found id: ""
	I1209 05:51:18.559879 1795150 logs.go:282] 1 containers: [b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52]
	I1209 05:51:18.559932 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:18.563330 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:51:18.563403 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:51:18.612727 1795150 cri.go:89] found id: ""
	I1209 05:51:18.612767 1795150 logs.go:282] 0 containers: []
	W1209 05:51:18.612776 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:51:18.612783 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:51:18.612856 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:51:18.655036 1795150 cri.go:89] found id: ""
	I1209 05:51:18.655058 1795150 logs.go:282] 0 containers: []
	W1209 05:51:18.655066 1795150 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:51:18.655080 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:51:18.655092 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:51:18.778649 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:51:18.778691 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:51:18.798441 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:51:18.798470 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:51:18.866481 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:51:18.866501 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:51:18.866514 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:51:18.917857 1795150 logs.go:123] Gathering logs for kube-controller-manager [b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52] ...
	I1209 05:51:18.917888 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52"
	I1209 05:51:18.957621 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:51:18.957700 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:51:19.022641 1795150 logs.go:123] Gathering logs for kube-apiserver [1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff] ...
	I1209 05:51:19.022679 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff"
	I1209 05:51:19.065828 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:51:19.065858 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:51:19.162466 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:51:19.162501 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:51:19.199927 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:51:19.199954 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:51:21.745049 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:51:21.745537 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:51:21.745602 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:51:21.745660 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:51:21.783732 1795150 cri.go:89] found id: "1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff"
	I1209 05:51:21.783751 1795150 cri.go:89] found id: ""
	I1209 05:51:21.783759 1795150 logs.go:282] 1 containers: [1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff]
	I1209 05:51:21.783815 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:21.787286 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:51:21.787358 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:51:21.826875 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:51:21.826896 1795150 cri.go:89] found id: ""
	I1209 05:51:21.826904 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:51:21.826962 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:21.830939 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:51:21.831011 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:51:21.870286 1795150 cri.go:89] found id: ""
	I1209 05:51:21.870313 1795150 logs.go:282] 0 containers: []
	W1209 05:51:21.870323 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:51:21.870329 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:51:21.870391 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:51:21.907715 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:51:21.907736 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:51:21.907741 1795150 cri.go:89] found id: ""
	I1209 05:51:21.907749 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:51:21.907809 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:21.911475 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:21.915092 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:51:21.915166 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:51:21.952877 1795150 cri.go:89] found id: ""
	I1209 05:51:21.952902 1795150 logs.go:282] 0 containers: []
	W1209 05:51:21.952910 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:51:21.952916 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:51:21.952976 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:51:21.992575 1795150 cri.go:89] found id: "b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52"
	I1209 05:51:21.992598 1795150 cri.go:89] found id: ""
	I1209 05:51:21.992607 1795150 logs.go:282] 1 containers: [b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52]
	I1209 05:51:21.992667 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:21.996174 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:51:21.996246 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:51:22.036201 1795150 cri.go:89] found id: ""
	I1209 05:51:22.036224 1795150 logs.go:282] 0 containers: []
	W1209 05:51:22.036233 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:51:22.036239 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:51:22.036303 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:51:22.076051 1795150 cri.go:89] found id: ""
	I1209 05:51:22.076075 1795150 logs.go:282] 0 containers: []
	W1209 05:51:22.076084 1795150 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:51:22.076098 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:51:22.076110 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:51:22.190870 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:51:22.190904 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:51:22.209918 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:51:22.209949 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:51:22.269889 1795150 logs.go:123] Gathering logs for kube-controller-manager [b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52] ...
	I1209 05:51:22.269921 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52"
	I1209 05:51:22.313255 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:51:22.313871 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:51:22.394008 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:51:22.394813 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:51:22.447756 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:51:22.447783 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:51:22.522417 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:51:22.522435 1795150 logs.go:123] Gathering logs for kube-apiserver [1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff] ...
	I1209 05:51:22.522448 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff"
	I1209 05:51:22.565440 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:51:22.565470 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:51:22.658139 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:51:22.658176 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:51:25.200936 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:51:25.201503 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:51:25.201559 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:51:25.201626 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:51:25.238281 1795150 cri.go:89] found id: "1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff"
	I1209 05:51:25.238304 1795150 cri.go:89] found id: ""
	I1209 05:51:25.238312 1795150 logs.go:282] 1 containers: [1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff]
	I1209 05:51:25.238374 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:25.241952 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:51:25.242023 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:51:25.286596 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:51:25.286615 1795150 cri.go:89] found id: ""
	I1209 05:51:25.286623 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:51:25.286686 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:25.290525 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:51:25.290622 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:51:25.341707 1795150 cri.go:89] found id: ""
	I1209 05:51:25.341754 1795150 logs.go:282] 0 containers: []
	W1209 05:51:25.341791 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:51:25.341803 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:51:25.341881 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:51:25.401659 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:51:25.401695 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:51:25.401701 1795150 cri.go:89] found id: ""
	I1209 05:51:25.401716 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:51:25.401800 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:25.405352 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:25.409011 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:51:25.409089 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:51:25.447219 1795150 cri.go:89] found id: ""
	I1209 05:51:25.447241 1795150 logs.go:282] 0 containers: []
	W1209 05:51:25.447249 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:51:25.447255 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:51:25.447313 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:51:25.483837 1795150 cri.go:89] found id: "b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52"
	I1209 05:51:25.483860 1795150 cri.go:89] found id: ""
	I1209 05:51:25.483868 1795150 logs.go:282] 1 containers: [b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52]
	I1209 05:51:25.483924 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:25.487556 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:51:25.487666 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:51:25.524645 1795150 cri.go:89] found id: ""
	I1209 05:51:25.524672 1795150 logs.go:282] 0 containers: []
	W1209 05:51:25.524681 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:51:25.524687 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:51:25.524747 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:51:25.568262 1795150 cri.go:89] found id: ""
	I1209 05:51:25.568289 1795150 logs.go:282] 0 containers: []
	W1209 05:51:25.568298 1795150 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:51:25.568313 1795150 logs.go:123] Gathering logs for kube-controller-manager [b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52] ...
	I1209 05:51:25.568325 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52"
	I1209 05:51:25.613467 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:51:25.613500 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:51:25.677197 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:51:25.677233 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:51:25.718981 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:51:25.719012 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:51:25.837909 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:51:25.837946 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:51:25.856515 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:51:25.856550 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:51:25.895756 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:51:25.895796 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:51:25.966599 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:51:25.966624 1795150 logs.go:123] Gathering logs for kube-apiserver [1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff] ...
	I1209 05:51:25.966637 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff"
	I1209 05:51:26.010502 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:51:26.010541 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:51:26.057466 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:51:26.057502 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:51:28.657819 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:51:28.658281 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:51:28.658334 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:51:28.658392 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:51:28.697265 1795150 cri.go:89] found id: "1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff"
	I1209 05:51:28.697288 1795150 cri.go:89] found id: ""
	I1209 05:51:28.697297 1795150 logs.go:282] 1 containers: [1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff]
	I1209 05:51:28.697372 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:28.701381 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:51:28.701455 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:51:28.742638 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:51:28.742664 1795150 cri.go:89] found id: ""
	I1209 05:51:28.742672 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:51:28.742728 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:28.746159 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:51:28.746229 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:51:28.782378 1795150 cri.go:89] found id: ""
	I1209 05:51:28.782402 1795150 logs.go:282] 0 containers: []
	W1209 05:51:28.782410 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:51:28.782417 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:51:28.782474 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:51:28.819945 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:51:28.819968 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:51:28.819973 1795150 cri.go:89] found id: ""
	I1209 05:51:28.819980 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:51:28.820042 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:28.823740 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:28.827061 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:51:28.827132 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:51:28.864218 1795150 cri.go:89] found id: ""
	I1209 05:51:28.864241 1795150 logs.go:282] 0 containers: []
	W1209 05:51:28.864249 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:51:28.864256 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:51:28.864322 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:51:28.903535 1795150 cri.go:89] found id: "b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52"
	I1209 05:51:28.903555 1795150 cri.go:89] found id: ""
	I1209 05:51:28.903564 1795150 logs.go:282] 1 containers: [b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52]
	I1209 05:51:28.903620 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:28.907267 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:51:28.907338 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:51:28.947193 1795150 cri.go:89] found id: ""
	I1209 05:51:28.947226 1795150 logs.go:282] 0 containers: []
	W1209 05:51:28.947235 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:51:28.947242 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:51:28.947302 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:51:28.983275 1795150 cri.go:89] found id: ""
	I1209 05:51:28.983300 1795150 logs.go:282] 0 containers: []
	W1209 05:51:28.983309 1795150 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:51:28.983323 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:51:28.983334 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:51:29.069451 1795150 logs.go:123] Gathering logs for kube-controller-manager [b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52] ...
	I1209 05:51:29.069489 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52"
	I1209 05:51:29.121090 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:51:29.121121 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:51:29.191580 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:51:29.191620 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:51:29.317339 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:51:29.317380 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:51:29.385633 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:51:29.385657 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:51:29.385670 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:51:29.436822 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:51:29.436856 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:51:29.474385 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:51:29.474416 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:51:29.517774 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:51:29.517804 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:51:29.535567 1795150 logs.go:123] Gathering logs for kube-apiserver [1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff] ...
	I1209 05:51:29.535601 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff"
	I1209 05:51:32.082643 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:51:32.083069 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:51:32.083125 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:51:32.083188 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:51:32.134622 1795150 cri.go:89] found id: "1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff"
	I1209 05:51:32.134647 1795150 cri.go:89] found id: ""
	I1209 05:51:32.134656 1795150 logs.go:282] 1 containers: [1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff]
	I1209 05:51:32.134716 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:32.138911 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:51:32.139038 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:51:32.177488 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:51:32.177516 1795150 cri.go:89] found id: ""
	I1209 05:51:32.177525 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:51:32.177595 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:32.181211 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:51:32.181308 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:51:32.224498 1795150 cri.go:89] found id: ""
	I1209 05:51:32.224525 1795150 logs.go:282] 0 containers: []
	W1209 05:51:32.224536 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:51:32.224544 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:51:32.224610 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:51:32.261565 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:51:32.261597 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:51:32.261602 1795150 cri.go:89] found id: ""
	I1209 05:51:32.261609 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:51:32.261691 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:32.265457 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:32.270562 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:51:32.270698 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:51:32.310413 1795150 cri.go:89] found id: ""
	I1209 05:51:32.310498 1795150 logs.go:282] 0 containers: []
	W1209 05:51:32.310528 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:51:32.310547 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:51:32.310660 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:51:32.349307 1795150 cri.go:89] found id: "b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52"
	I1209 05:51:32.349377 1795150 cri.go:89] found id: ""
	I1209 05:51:32.349392 1795150 logs.go:282] 1 containers: [b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52]
	I1209 05:51:32.349451 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:32.352963 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:51:32.353075 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:51:32.392087 1795150 cri.go:89] found id: ""
	I1209 05:51:32.392112 1795150 logs.go:282] 0 containers: []
	W1209 05:51:32.392121 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:51:32.392128 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:51:32.392252 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:51:32.431891 1795150 cri.go:89] found id: ""
	I1209 05:51:32.431917 1795150 logs.go:282] 0 containers: []
	W1209 05:51:32.431926 1795150 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:51:32.431940 1795150 logs.go:123] Gathering logs for kube-apiserver [1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff] ...
	I1209 05:51:32.431951 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff"
	I1209 05:51:32.479916 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:51:32.479949 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:51:32.531767 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:51:32.531804 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:51:32.570670 1795150 logs.go:123] Gathering logs for kube-controller-manager [b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52] ...
	I1209 05:51:32.570696 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52"
	I1209 05:51:32.614919 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:51:32.614951 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:51:32.683804 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:51:32.683844 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:51:32.811794 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:51:32.811834 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:51:32.831747 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:51:32.831775 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:51:32.912834 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:51:32.912858 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:51:32.912874 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:51:32.999315 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:51:32.999353 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:51:35.557870 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:51:35.558378 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:51:35.558434 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:51:35.558499 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:51:35.598615 1795150 cri.go:89] found id: "1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff"
	I1209 05:51:35.598689 1795150 cri.go:89] found id: ""
	I1209 05:51:35.598703 1795150 logs.go:282] 1 containers: [1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff]
	I1209 05:51:35.598762 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:35.602507 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:51:35.602647 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:51:35.640455 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:51:35.640475 1795150 cri.go:89] found id: ""
	I1209 05:51:35.640484 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:51:35.640543 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:35.644358 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:51:35.644434 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:51:35.681969 1795150 cri.go:89] found id: ""
	I1209 05:51:35.681993 1795150 logs.go:282] 0 containers: []
	W1209 05:51:35.682006 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:51:35.682013 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:51:35.682075 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:51:35.720970 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:51:35.720995 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:51:35.721001 1795150 cri.go:89] found id: ""
	I1209 05:51:35.721009 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:51:35.721071 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:35.724908 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:35.728568 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:51:35.728666 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:51:35.766876 1795150 cri.go:89] found id: ""
	I1209 05:51:35.766946 1795150 logs.go:282] 0 containers: []
	W1209 05:51:35.766961 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:51:35.766968 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:51:35.767036 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:51:35.805310 1795150 cri.go:89] found id: "b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52"
	I1209 05:51:35.805371 1795150 cri.go:89] found id: ""
	I1209 05:51:35.805393 1795150 logs.go:282] 1 containers: [b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52]
	I1209 05:51:35.805461 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:35.809087 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:51:35.809168 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:51:35.869999 1795150 cri.go:89] found id: ""
	I1209 05:51:35.870029 1795150 logs.go:282] 0 containers: []
	W1209 05:51:35.870038 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:51:35.870045 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:51:35.870109 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:51:35.917568 1795150 cri.go:89] found id: ""
	I1209 05:51:35.917652 1795150 logs.go:282] 0 containers: []
	W1209 05:51:35.917667 1795150 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:51:35.917685 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:51:35.917697 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:51:35.935656 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:51:35.935685 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:51:35.984819 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:51:35.984851 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:51:36.070082 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:51:36.070119 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:51:36.109046 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:51:36.109118 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:51:36.174937 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:51:36.174974 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:51:36.219642 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:51:36.219673 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:51:36.337280 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:51:36.337317 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:51:36.408732 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:51:36.408753 1795150 logs.go:123] Gathering logs for kube-apiserver [1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff] ...
	I1209 05:51:36.408765 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff"
	I1209 05:51:36.452435 1795150 logs.go:123] Gathering logs for kube-controller-manager [b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52] ...
	I1209 05:51:36.452466 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52"
	I1209 05:51:38.990257 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:51:38.990745 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:51:38.990796 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:51:38.990855 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:51:39.029364 1795150 cri.go:89] found id: "1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff"
	I1209 05:51:39.029388 1795150 cri.go:89] found id: ""
	I1209 05:51:39.029397 1795150 logs.go:282] 1 containers: [1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff]
	I1209 05:51:39.029456 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:39.033003 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:51:39.033078 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:51:39.070041 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:51:39.070060 1795150 cri.go:89] found id: ""
	I1209 05:51:39.070067 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:51:39.070123 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:39.073851 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:51:39.073927 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:51:39.111584 1795150 cri.go:89] found id: ""
	I1209 05:51:39.111607 1795150 logs.go:282] 0 containers: []
	W1209 05:51:39.111615 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:51:39.111621 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:51:39.111683 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:51:39.150335 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:51:39.150355 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:51:39.150360 1795150 cri.go:89] found id: ""
	I1209 05:51:39.150368 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:51:39.150425 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:39.154143 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:39.157578 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:51:39.157664 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:51:39.195964 1795150 cri.go:89] found id: ""
	I1209 05:51:39.195991 1795150 logs.go:282] 0 containers: []
	W1209 05:51:39.196001 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:51:39.196007 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:51:39.196070 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:51:39.236306 1795150 cri.go:89] found id: "b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52"
	I1209 05:51:39.236335 1795150 cri.go:89] found id: ""
	I1209 05:51:39.236348 1795150 logs.go:282] 1 containers: [b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52]
	I1209 05:51:39.236472 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:39.240388 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:51:39.240507 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:51:39.286022 1795150 cri.go:89] found id: ""
	I1209 05:51:39.286046 1795150 logs.go:282] 0 containers: []
	W1209 05:51:39.286054 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:51:39.286060 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:51:39.286121 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:51:39.326805 1795150 cri.go:89] found id: ""
	I1209 05:51:39.326829 1795150 logs.go:282] 0 containers: []
	W1209 05:51:39.326838 1795150 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:51:39.326892 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:51:39.326915 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:51:39.344800 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:51:39.344829 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:51:39.416058 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:51:39.416081 1795150 logs.go:123] Gathering logs for kube-apiserver [1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff] ...
	I1209 05:51:39.416095 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff"
	I1209 05:51:39.460010 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:51:39.460043 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:51:39.504293 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:51:39.504331 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:51:39.540960 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:51:39.540994 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:51:39.610517 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:51:39.610558 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:51:39.664248 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:51:39.664278 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:51:39.787625 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:51:39.787662 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:51:39.870205 1795150 logs.go:123] Gathering logs for kube-controller-manager [b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52] ...
	I1209 05:51:39.870242 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52"
	I1209 05:51:42.411640 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:51:42.412113 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:51:42.412163 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:51:42.412227 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:51:42.451006 1795150 cri.go:89] found id: "1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff"
	I1209 05:51:42.451026 1795150 cri.go:89] found id: ""
	I1209 05:51:42.451035 1795150 logs.go:282] 1 containers: [1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff]
	I1209 05:51:42.451095 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:42.454645 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:51:42.454724 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:51:42.492870 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:51:42.492892 1795150 cri.go:89] found id: ""
	I1209 05:51:42.492901 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:51:42.492981 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:42.496635 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:51:42.496711 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:51:42.533996 1795150 cri.go:89] found id: ""
	I1209 05:51:42.534020 1795150 logs.go:282] 0 containers: []
	W1209 05:51:42.534029 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:51:42.534035 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:51:42.534117 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:51:42.571329 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:51:42.571350 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:51:42.571355 1795150 cri.go:89] found id: ""
	I1209 05:51:42.571363 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:51:42.571421 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:42.576898 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:42.580895 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:51:42.580969 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:51:42.634147 1795150 cri.go:89] found id: ""
	I1209 05:51:42.634169 1795150 logs.go:282] 0 containers: []
	W1209 05:51:42.634178 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:51:42.634184 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:51:42.634242 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:51:42.673348 1795150 cri.go:89] found id: "b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52"
	I1209 05:51:42.673370 1795150 cri.go:89] found id: ""
	I1209 05:51:42.673378 1795150 logs.go:282] 1 containers: [b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52]
	I1209 05:51:42.673459 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:42.677235 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:51:42.677308 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:51:42.715713 1795150 cri.go:89] found id: ""
	I1209 05:51:42.715738 1795150 logs.go:282] 0 containers: []
	W1209 05:51:42.715747 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:51:42.715753 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:51:42.715836 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:51:42.755033 1795150 cri.go:89] found id: ""
	I1209 05:51:42.755057 1795150 logs.go:282] 0 containers: []
	W1209 05:51:42.755066 1795150 logs.go:284] No container was found matching "storage-provisioner"
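Each "listing CRI containers" block above runs one `crictl ps` filter per control-plane component and warns when the filter matches nothing. The enumeration, condensed into a sketch (component names copied from the log):

    # Enumerate containers per component, as the cri.go lines above do.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
        ids=$(sudo crictl ps -a --quiet --name="$name")
        if [ -z "$ids" ]; then
            echo "No container was found matching \"$name\""
        else
            echo "$name: $ids"
        fi
    done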
	I1209 05:51:42.755080 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:51:42.755091 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:51:42.822529 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:51:42.822579 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
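The "container status" step is a two-stage fallback: use crictl if it resolves on PATH, otherwise fall back to Docker. Unrolled:

    # `which crictl || echo crictl` keeps the command word non-empty, so if
    # crictl is missing the first command still runs (and fails), which
    # triggers the `|| sudo docker ps -a` fallback.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a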
	I1209 05:51:42.865914 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:51:42.865943 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:51:42.992289 1795150 logs.go:123] Gathering logs for kube-apiserver [1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff] ...
	I1209 05:51:42.992333 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff"
	I1209 05:51:43.050012 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:51:43.050048 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
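The dmesg step keeps only warning-and-worse kernel messages, capped at 400 lines. The flags, annotated (util-linux dmesg):

    # -P: do not pipe into a pager        -H: human-readable timestamps
    # -L=never: no colour escape codes    --level: restrict to the listed levels
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400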
	I1209 05:51:43.069905 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:51:43.069995 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:51:43.144694 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
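Note the host mismatch: the healthz probes target 192.168.76.2:8443, while kubectl reports localhost:8443, because the node-local kubeconfig points the bundled kubectl at the apiserver on the node itself; both fail for the same reason while the apiserver container is down. The node-side invocation, as run above (paths copied from the log):

    # Version-pinned kubectl shipped on the node, using the node-local
    # kubeconfig; returns "connection refused" until the apiserver is up.
    sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig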
	I1209 05:51:43.144729 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:51:43.144743 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:51:43.199911 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:51:43.199943 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:51:43.295773 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:51:43.295807 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:51:43.389209 1795150 logs.go:123] Gathering logs for kube-controller-manager [b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52] ...
	I1209 05:51:43.389239 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52"
	I1209 05:51:45.955228 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:51:50.955615 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 05:51:50.955709 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:51:50.955777 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:51:50.998412 1795150 cri.go:89] found id: "44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:51:50.998438 1795150 cri.go:89] found id: "1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff"
	I1209 05:51:50.998446 1795150 cri.go:89] found id: ""
	I1209 05:51:50.998453 1795150 logs.go:282] 2 containers: [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea 1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff]
	I1209 05:51:50.998510 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:51.002131 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:51.005985 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:51:51.006071 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:51:51.054159 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:51:51.054183 1795150 cri.go:89] found id: ""
	I1209 05:51:51.054192 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:51:51.054254 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:51.058207 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:51:51.058289 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:51:51.100042 1795150 cri.go:89] found id: ""
	I1209 05:51:51.100068 1795150 logs.go:282] 0 containers: []
	W1209 05:51:51.100077 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:51:51.100083 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:51:51.100144 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:51:51.139902 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:51:51.139926 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:51:51.139931 1795150 cri.go:89] found id: ""
	I1209 05:51:51.139939 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:51:51.140001 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:51.143885 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:51.147376 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:51:51.147453 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:51:51.185547 1795150 cri.go:89] found id: ""
	I1209 05:51:51.185570 1795150 logs.go:282] 0 containers: []
	W1209 05:51:51.185579 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:51:51.185585 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:51:51.185681 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:51:51.223304 1795150 cri.go:89] found id: "b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52"
	I1209 05:51:51.223327 1795150 cri.go:89] found id: ""
	I1209 05:51:51.223336 1795150 logs.go:282] 1 containers: [b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52]
	I1209 05:51:51.223409 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:51:51.227094 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:51:51.227175 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:51:51.266329 1795150 cri.go:89] found id: ""
	I1209 05:51:51.266352 1795150 logs.go:282] 0 containers: []
	W1209 05:51:51.266360 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:51:51.266366 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:51:51.266427 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:51:51.304726 1795150 cri.go:89] found id: ""
	I1209 05:51:51.304759 1795150 logs.go:282] 0 containers: []
	W1209 05:51:51.304768 1795150 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:51:51.304778 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:51:51.304790 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:51:51.425460 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:51:51.425496 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:51:51.445878 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:51:51.445907 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 05:52:01.519087 1795150 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.073153474s)
	W1209 05:52:01.519140 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
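This cycle fails differently: the describe ran for about 10 seconds (ssh_runner reports 10.073s) and died on a TLS handshake timeout rather than a refused connection, meaning something accepted the TCP connection but could not complete TLS in time; the listings from 05:51:50 onward show a second apiserver container (44cad338...) that was likely still starting. A probe that separates the two cases (host copied from the healthz checks above; timeouts are assumptions):

    # Distinguish "nothing listening" from "listening but not serving TLS yet".
    if ! timeout 2 bash -c '</dev/tcp/192.168.76.2/8443' 2>/dev/null; then
        echo "connection refused: no listener"
    elif ! curl -fsSk --max-time 10 https://192.168.76.2:8443/healthz >/dev/null; then
        echo "TCP open but TLS/HTTP not completing (apiserver still starting?)"
    else
        echo "healthy"
    fi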
	I1209 05:52:01.519148 1795150 logs.go:123] Gathering logs for kube-apiserver [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea] ...
	I1209 05:52:01.519159 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:52:01.561454 1795150 logs.go:123] Gathering logs for kube-apiserver [1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff] ...
	I1209 05:52:01.561491 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff"
	I1209 05:52:01.609246 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:52:01.609278 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:52:01.712697 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:52:01.712786 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:52:01.752361 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:52:01.752443 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:01.795456 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:52:01.795540 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:52:01.847231 1795150 logs.go:123] Gathering logs for kube-controller-manager [b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52] ...
	I1209 05:52:01.847266 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52"
	I1209 05:52:01.886188 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:52:01.886227 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:52:04.454232 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:52:04.454866 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:52:04.454984 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:04.455076 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:04.522894 1795150 cri.go:89] found id: "44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:52:04.522986 1795150 cri.go:89] found id: "1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff"
	I1209 05:52:04.523006 1795150 cri.go:89] found id: ""
	I1209 05:52:04.523070 1795150 logs.go:282] 2 containers: [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea 1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff]
	I1209 05:52:04.523197 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:04.531166 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:04.536799 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:52:04.536998 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:04.588163 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:52:04.588243 1795150 cri.go:89] found id: ""
	I1209 05:52:04.588283 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:52:04.588382 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:04.593644 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:52:04.593815 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:04.666091 1795150 cri.go:89] found id: ""
	I1209 05:52:04.666183 1795150 logs.go:282] 0 containers: []
	W1209 05:52:04.666206 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:52:04.666246 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:04.666353 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:04.738499 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:52:04.738598 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:52:04.738625 1795150 cri.go:89] found id: ""
	I1209 05:52:04.738677 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:52:04.738799 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:04.744197 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:04.753554 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:04.753643 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:04.801405 1795150 cri.go:89] found id: ""
	I1209 05:52:04.801428 1795150 logs.go:282] 0 containers: []
	W1209 05:52:04.801437 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:04.801443 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:04.801504 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:04.856310 1795150 cri.go:89] found id: "2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30"
	I1209 05:52:04.856342 1795150 cri.go:89] found id: "b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52"
	I1209 05:52:04.856349 1795150 cri.go:89] found id: ""
	I1209 05:52:04.856357 1795150 logs.go:282] 2 containers: [2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30 b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52]
	I1209 05:52:04.856426 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:04.860499 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:04.864669 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:04.864783 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:04.909929 1795150 cri.go:89] found id: ""
	I1209 05:52:04.909952 1795150 logs.go:282] 0 containers: []
	W1209 05:52:04.909961 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:04.909968 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:52:04.910042 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:52:04.981334 1795150 cri.go:89] found id: ""
	I1209 05:52:04.981359 1795150 logs.go:282] 0 containers: []
	W1209 05:52:04.981367 1795150 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:52:04.981377 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:04.981389 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:05.080066 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:05.080089 1795150 logs.go:123] Gathering logs for kube-apiserver [1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff] ...
	I1209 05:52:05.080105 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff"
	W1209 05:52:05.165388 1795150 logs.go:130] failed kube-apiserver [1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff": Process exited with status 1
	stdout:
	
	stderr:
	E1209 05:52:05.151123    5628 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff\": container with ID starting with 1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff not found: ID does not exist" containerID="1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff"
	time="2025-12-09T05:52:05Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff\": container with ID starting with 1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff not found: ID does not exist"
	 output: 
	** stderr ** 
	E1209 05:52:05.151123    5628 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff\": container with ID starting with 1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff not found: ID does not exist" containerID="1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff"
	time="2025-12-09T05:52:05Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff\": container with ID starting with 1c1b99113fae0c4710b0ea47be7c65f5fc65b7bb32b3a1f6d08f96b880502fff not found: ID does not exist"
	
	** /stderr **
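A race surfaces here: the 05:52:04 listing still returned the old apiserver container 1c1b9911..., but by the time `crictl logs` ran a second later CRI-O had already removed it, hence the NotFound failure. A tolerant fetch, sketched (fetch_tail is a hypothetical helper, not minikube code):

    # fetch_tail: skip containers that vanished between `crictl ps`
    # and `crictl logs` instead of failing the whole gather.
    fetch_tail() {
        local id="$1"
        if ! sudo crictl inspect "$id" >/dev/null 2>&1; then
            echo "container $id no longer exists; skipping" >&2
            return 0
        fi
        sudo /usr/bin/crictl logs --tail 400 "$id"
    }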
	I1209 05:52:05.165413 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:52:05.165428 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:52:05.295400 1795150 logs.go:123] Gathering logs for kube-controller-manager [b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52] ...
	I1209 05:52:05.295476 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52"
	I1209 05:52:05.342370 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:52:05.342464 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:52:05.420755 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:05.420837 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:05.441220 1795150 logs.go:123] Gathering logs for kube-apiserver [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea] ...
	I1209 05:52:05.441249 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:52:05.496415 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:52:05.496485 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:52:05.551689 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:52:05.551764 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:52:05.601573 1795150 logs.go:123] Gathering logs for kube-controller-manager [2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30] ...
	I1209 05:52:05.601608 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30"
	I1209 05:52:05.653138 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:52:05.653174 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:05.710700 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:05.710725 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:08.344896 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:52:08.345336 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:52:08.345384 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:08.345448 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:08.383190 1795150 cri.go:89] found id: "44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:52:08.383209 1795150 cri.go:89] found id: ""
	I1209 05:52:08.383217 1795150 logs.go:282] 1 containers: [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea]
	I1209 05:52:08.383278 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:08.386779 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:52:08.386868 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:08.425302 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:52:08.425326 1795150 cri.go:89] found id: ""
	I1209 05:52:08.425334 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:52:08.425392 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:08.428818 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:52:08.428902 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:08.465783 1795150 cri.go:89] found id: ""
	I1209 05:52:08.465810 1795150 logs.go:282] 0 containers: []
	W1209 05:52:08.465819 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:52:08.465825 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:08.465890 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:08.503648 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:52:08.503671 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:52:08.503676 1795150 cri.go:89] found id: ""
	I1209 05:52:08.503683 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:52:08.503747 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:08.507775 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:08.514865 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:08.514947 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:08.552207 1795150 cri.go:89] found id: ""
	I1209 05:52:08.552230 1795150 logs.go:282] 0 containers: []
	W1209 05:52:08.552241 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:08.552247 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:08.552307 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:08.589468 1795150 cri.go:89] found id: "2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30"
	I1209 05:52:08.589491 1795150 cri.go:89] found id: "b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52"
	I1209 05:52:08.589497 1795150 cri.go:89] found id: ""
	I1209 05:52:08.589504 1795150 logs.go:282] 2 containers: [2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30 b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52]
	I1209 05:52:08.589579 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:08.593184 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:08.596439 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:08.596521 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:08.635671 1795150 cri.go:89] found id: ""
	I1209 05:52:08.635698 1795150 logs.go:282] 0 containers: []
	W1209 05:52:08.635707 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:08.635713 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:52:08.635771 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:52:08.671278 1795150 cri.go:89] found id: ""
	I1209 05:52:08.671307 1795150 logs.go:282] 0 containers: []
	W1209 05:52:08.671317 1795150 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:52:08.671326 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:52:08.671337 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:52:08.739617 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:08.739655 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:08.873494 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:08.873584 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:08.941260 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:08.941281 1795150 logs.go:123] Gathering logs for kube-apiserver [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea] ...
	I1209 05:52:08.941294 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:52:08.985990 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:52:08.986020 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:52:09.094789 1795150 logs.go:123] Gathering logs for kube-controller-manager [b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52] ...
	I1209 05:52:09.094836 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52"
	I1209 05:52:09.131662 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:52:09.131690 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:09.173102 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:09.173143 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:09.191014 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:52:09.191042 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:52:09.235006 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:52:09.235038 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:52:09.273083 1795150 logs.go:123] Gathering logs for kube-controller-manager [2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30] ...
	I1209 05:52:09.273110 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30"
	I1209 05:52:11.810368 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:52:11.810862 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:52:11.810911 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:11.810971 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:11.858481 1795150 cri.go:89] found id: "44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:52:11.858507 1795150 cri.go:89] found id: ""
	I1209 05:52:11.858515 1795150 logs.go:282] 1 containers: [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea]
	I1209 05:52:11.858601 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:11.862795 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:52:11.862871 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:11.908373 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:52:11.908392 1795150 cri.go:89] found id: ""
	I1209 05:52:11.908400 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:52:11.908457 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:11.912117 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:52:11.912192 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:11.949988 1795150 cri.go:89] found id: ""
	I1209 05:52:11.950065 1795150 logs.go:282] 0 containers: []
	W1209 05:52:11.950087 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:52:11.950101 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:11.950171 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:11.989033 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:52:11.989058 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:52:11.989064 1795150 cri.go:89] found id: ""
	I1209 05:52:11.989072 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:52:11.989131 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:11.992768 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:11.996148 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:11.996232 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:12.039690 1795150 cri.go:89] found id: ""
	I1209 05:52:12.039713 1795150 logs.go:282] 0 containers: []
	W1209 05:52:12.039722 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:12.039728 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:12.039789 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:12.078754 1795150 cri.go:89] found id: "2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30"
	I1209 05:52:12.078774 1795150 cri.go:89] found id: "b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52"
	I1209 05:52:12.078778 1795150 cri.go:89] found id: ""
	I1209 05:52:12.078786 1795150 logs.go:282] 2 containers: [2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30 b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52]
	I1209 05:52:12.078841 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:12.082426 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:12.085758 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:12.085841 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:12.122543 1795150 cri.go:89] found id: ""
	I1209 05:52:12.122566 1795150 logs.go:282] 0 containers: []
	W1209 05:52:12.122720 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:12.122729 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:52:12.122798 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:52:12.163310 1795150 cri.go:89] found id: ""
	I1209 05:52:12.163334 1795150 logs.go:282] 0 containers: []
	W1209 05:52:12.163342 1795150 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:52:12.163352 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:52:12.163368 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:52:12.217977 1795150 logs.go:123] Gathering logs for kube-controller-manager [2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30] ...
	I1209 05:52:12.218009 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30"
	I1209 05:52:12.260133 1795150 logs.go:123] Gathering logs for kube-controller-manager [b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52] ...
	I1209 05:52:12.260159 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52"
	I1209 05:52:12.303651 1795150 logs.go:123] Gathering logs for kube-apiserver [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea] ...
	I1209 05:52:12.303684 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:52:12.348128 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:52:12.348162 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:52:12.444730 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:52:12.444768 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:52:12.481947 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:52:12.481991 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:52:12.549254 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:52:12.549289 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:12.601359 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:12.601390 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:12.737596 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:12.737635 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:12.756091 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:12.756127 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:12.830271 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
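Stepping back, the probes above recur every 3 to 4 seconds (05:52:04, :08, :11, :15, :18, :22), with a full log gather after each failure. The outer loop, reduced to a sketch (interval and budget are inferred from the timestamps, not taken from minikube source; gather_diagnostics is a hypothetical stand-in for the crictl/journalctl steps above):

    # Poll healthz until it succeeds or an overall budget runs out;
    # collect diagnostics after every failed probe.
    deadline=$((SECONDS + 300))   # assumed overall budget
    until curl -fsSk --max-time 5 https://192.168.76.2:8443/healthz >/dev/null; do
        if [ "$SECONDS" -ge "$deadline" ]; then
            echo "apiserver never became healthy" >&2
            exit 1
        fi
        gather_diagnostics        # hypothetical: the crictl/journalctl steps above
        sleep 3                   # interval inferred from the log timestamps
    done
    echo "apiserver healthy"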
	I1209 05:52:15.330433 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:52:15.330897 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:52:15.330948 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:15.331000 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:15.368583 1795150 cri.go:89] found id: "44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:52:15.368608 1795150 cri.go:89] found id: ""
	I1209 05:52:15.368617 1795150 logs.go:282] 1 containers: [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea]
	I1209 05:52:15.368681 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:15.372283 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:52:15.372354 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:15.409342 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:52:15.409364 1795150 cri.go:89] found id: ""
	I1209 05:52:15.409373 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:52:15.409426 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:15.412795 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:52:15.412862 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:15.449353 1795150 cri.go:89] found id: ""
	I1209 05:52:15.449379 1795150 logs.go:282] 0 containers: []
	W1209 05:52:15.449389 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:52:15.449395 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:15.449455 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:15.486483 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:52:15.486503 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:52:15.486507 1795150 cri.go:89] found id: ""
	I1209 05:52:15.486515 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:52:15.486593 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:15.490271 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:15.493920 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:15.493998 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:15.531124 1795150 cri.go:89] found id: ""
	I1209 05:52:15.531147 1795150 logs.go:282] 0 containers: []
	W1209 05:52:15.531156 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:15.531162 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:15.531218 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:15.574038 1795150 cri.go:89] found id: "2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30"
	I1209 05:52:15.574061 1795150 cri.go:89] found id: "b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52"
	I1209 05:52:15.574066 1795150 cri.go:89] found id: ""
	I1209 05:52:15.574074 1795150 logs.go:282] 2 containers: [2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30 b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52]
	I1209 05:52:15.574125 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:15.578304 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:15.581902 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:15.581976 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:15.630166 1795150 cri.go:89] found id: ""
	I1209 05:52:15.630191 1795150 logs.go:282] 0 containers: []
	W1209 05:52:15.630205 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:15.630212 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:52:15.630268 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:52:15.668930 1795150 cri.go:89] found id: ""
	I1209 05:52:15.668953 1795150 logs.go:282] 0 containers: []
	W1209 05:52:15.668962 1795150 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:52:15.668971 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:52:15.668984 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:52:15.782081 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:52:15.782116 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:52:15.859064 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:52:15.859104 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:15.902211 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:15.902241 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:16.024114 1795150 logs.go:123] Gathering logs for kube-apiserver [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea] ...
	I1209 05:52:16.024153 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:52:16.068713 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:52:16.068747 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:52:16.113883 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:52:16.113914 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:52:16.151252 1795150 logs.go:123] Gathering logs for kube-controller-manager [2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30] ...
	I1209 05:52:16.151334 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30"
	I1209 05:52:16.189011 1795150 logs.go:123] Gathering logs for kube-controller-manager [b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52] ...
	I1209 05:52:16.189039 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5f43826d251dd972e61a0c122220d40d74436789058eb722bb68108c859ad52"
	I1209 05:52:16.238526 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:16.238553 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:16.259707 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:16.259737 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:16.340410 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:18.841089 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:52:18.841620 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:52:18.841676 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:18.841741 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:18.883133 1795150 cri.go:89] found id: "44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:52:18.883154 1795150 cri.go:89] found id: ""
	I1209 05:52:18.883162 1795150 logs.go:282] 1 containers: [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea]
	I1209 05:52:18.883225 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:18.886889 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:52:18.886973 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:18.925126 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:52:18.925149 1795150 cri.go:89] found id: ""
	I1209 05:52:18.925157 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:52:18.925216 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:18.928848 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:52:18.928923 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:18.966296 1795150 cri.go:89] found id: ""
	I1209 05:52:18.966322 1795150 logs.go:282] 0 containers: []
	W1209 05:52:18.966334 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:52:18.966341 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:18.966397 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:19.003098 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:52:19.003122 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:52:19.003127 1795150 cri.go:89] found id: ""
	I1209 05:52:19.003136 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:52:19.003192 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:19.008430 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:19.013415 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:19.013493 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:19.051996 1795150 cri.go:89] found id: ""
	I1209 05:52:19.052074 1795150 logs.go:282] 0 containers: []
	W1209 05:52:19.052090 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:19.052097 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:19.052162 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:19.094619 1795150 cri.go:89] found id: "2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30"
	I1209 05:52:19.094640 1795150 cri.go:89] found id: ""
	I1209 05:52:19.094649 1795150 logs.go:282] 1 containers: [2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30]
	I1209 05:52:19.094712 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:19.098232 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:19.098306 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:19.141355 1795150 cri.go:89] found id: ""
	I1209 05:52:19.141431 1795150 logs.go:282] 0 containers: []
	W1209 05:52:19.141447 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:19.141455 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:52:19.141516 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:52:19.179270 1795150 cri.go:89] found id: ""
	I1209 05:52:19.179301 1795150 logs.go:282] 0 containers: []
	W1209 05:52:19.179310 1795150 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:52:19.179325 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:52:19.179336 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:52:19.219274 1795150 logs.go:123] Gathering logs for kube-controller-manager [2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30] ...
	I1209 05:52:19.219303 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30"
	I1209 05:52:19.257453 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:52:19.257481 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:52:19.330158 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:52:19.330199 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:19.396767 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:19.396796 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:19.415767 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:19.415797 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:19.487143 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:19.487164 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:52:19.487178 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:52:19.532287 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:19.532323 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:19.665324 1795150 logs.go:123] Gathering logs for kube-apiserver [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea] ...
	I1209 05:52:19.665366 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:52:19.711769 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:52:19.711801 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
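	# A minimal sketch for reproducing minikube's probe loop above by hand
	# (assumes shell access to the minikube node, e.g. via `minikube ssh`;
	# the container ID below is the kube-apiserver ID reported in this log):
	curl -k https://192.168.76.2:8443/healthz
	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo /usr/bin/crictl logs --tail 400 44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea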
	I1209 05:52:22.326638 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:52:22.327033 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:52:22.327073 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:22.327125 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:22.388649 1795150 cri.go:89] found id: "44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:52:22.388668 1795150 cri.go:89] found id: ""
	I1209 05:52:22.388676 1795150 logs.go:282] 1 containers: [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea]
	I1209 05:52:22.388739 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:22.393521 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:52:22.393596 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:22.432200 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:52:22.432223 1795150 cri.go:89] found id: ""
	I1209 05:52:22.432233 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:52:22.432292 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:22.436316 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:52:22.436394 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:22.474792 1795150 cri.go:89] found id: ""
	I1209 05:52:22.474815 1795150 logs.go:282] 0 containers: []
	W1209 05:52:22.474824 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:52:22.474830 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:22.474894 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:22.513583 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:52:22.513610 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:52:22.513615 1795150 cri.go:89] found id: ""
	I1209 05:52:22.513623 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:52:22.513685 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:22.517504 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:22.521127 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:22.521204 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:22.558934 1795150 cri.go:89] found id: ""
	I1209 05:52:22.558959 1795150 logs.go:282] 0 containers: []
	W1209 05:52:22.558968 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:22.558974 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:22.559032 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:22.601243 1795150 cri.go:89] found id: "2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30"
	I1209 05:52:22.601277 1795150 cri.go:89] found id: ""
	I1209 05:52:22.601286 1795150 logs.go:282] 1 containers: [2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30]
	I1209 05:52:22.601363 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:22.604978 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:22.605065 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:22.642408 1795150 cri.go:89] found id: ""
	I1209 05:52:22.642442 1795150 logs.go:282] 0 containers: []
	W1209 05:52:22.642452 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:22.642458 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:52:22.642519 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:52:22.681305 1795150 cri.go:89] found id: ""
	I1209 05:52:22.681331 1795150 logs.go:282] 0 containers: []
	W1209 05:52:22.681340 1795150 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:52:22.681357 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:52:22.681369 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:52:22.735607 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:52:22.735639 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:52:22.836749 1795150 logs.go:123] Gathering logs for kube-controller-manager [2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30] ...
	I1209 05:52:22.836788 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30"
	I1209 05:52:22.876835 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:22.876866 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:23.004725 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:23.004776 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:23.083776 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:23.083799 1795150 logs.go:123] Gathering logs for kube-apiserver [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea] ...
	I1209 05:52:23.083812 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:52:23.140759 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:52:23.140835 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:52:23.183075 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:52:23.183108 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:52:23.251308 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:52:23.251346 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:23.298457 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:23.298487 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:25.816929 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:52:25.817413 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:52:25.817479 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:25.817538 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:25.855683 1795150 cri.go:89] found id: "44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:52:25.855706 1795150 cri.go:89] found id: ""
	I1209 05:52:25.855715 1795150 logs.go:282] 1 containers: [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea]
	I1209 05:52:25.855779 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:25.859451 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:52:25.859528 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:25.898166 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:52:25.898191 1795150 cri.go:89] found id: ""
	I1209 05:52:25.898199 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:52:25.898255 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:25.902070 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:52:25.902144 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:25.940316 1795150 cri.go:89] found id: ""
	I1209 05:52:25.940340 1795150 logs.go:282] 0 containers: []
	W1209 05:52:25.940349 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:52:25.940355 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:25.940419 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:25.985090 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:52:25.985111 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:52:25.985115 1795150 cri.go:89] found id: ""
	I1209 05:52:25.985123 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:52:25.985182 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:25.988838 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:25.992762 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:25.992884 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:26.033302 1795150 cri.go:89] found id: ""
	I1209 05:52:26.033328 1795150 logs.go:282] 0 containers: []
	W1209 05:52:26.033381 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:26.033395 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:26.033481 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:26.073075 1795150 cri.go:89] found id: "2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30"
	I1209 05:52:26.073111 1795150 cri.go:89] found id: ""
	I1209 05:52:26.073120 1795150 logs.go:282] 1 containers: [2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30]
	I1209 05:52:26.073191 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:26.077650 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:26.077735 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:26.130630 1795150 cri.go:89] found id: ""
	I1209 05:52:26.130655 1795150 logs.go:282] 0 containers: []
	W1209 05:52:26.130664 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:26.130671 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:52:26.130734 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:52:26.176931 1795150 cri.go:89] found id: ""
	I1209 05:52:26.176958 1795150 logs.go:282] 0 containers: []
	W1209 05:52:26.176967 1795150 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:52:26.176982 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:26.176994 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:26.318910 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:26.318955 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:26.337002 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:26.337032 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:26.413829 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:26.413852 1795150 logs.go:123] Gathering logs for kube-apiserver [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea] ...
	I1209 05:52:26.413867 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:52:26.455451 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:52:26.455487 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:52:26.494426 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:52:26.494509 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:26.535087 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:52:26.535116 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:52:26.584254 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:52:26.584285 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:52:26.672516 1795150 logs.go:123] Gathering logs for kube-controller-manager [2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30] ...
	I1209 05:52:26.672554 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30"
	I1209 05:52:26.715489 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:52:26.715521 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:52:29.287783 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:52:29.288269 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:52:29.288324 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:29.288384 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:29.331224 1795150 cri.go:89] found id: "44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:52:29.331245 1795150 cri.go:89] found id: ""
	I1209 05:52:29.331253 1795150 logs.go:282] 1 containers: [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea]
	I1209 05:52:29.331309 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:29.335248 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:52:29.335321 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:29.376465 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:52:29.376488 1795150 cri.go:89] found id: ""
	I1209 05:52:29.376496 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:52:29.376550 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:29.380119 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:52:29.380193 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:29.417237 1795150 cri.go:89] found id: ""
	I1209 05:52:29.417269 1795150 logs.go:282] 0 containers: []
	W1209 05:52:29.417278 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:52:29.417285 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:29.417344 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:29.453813 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:52:29.453832 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:52:29.453838 1795150 cri.go:89] found id: ""
	I1209 05:52:29.453845 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:52:29.453905 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:29.457556 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:29.460868 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:29.460942 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:29.499076 1795150 cri.go:89] found id: ""
	I1209 05:52:29.499103 1795150 logs.go:282] 0 containers: []
	W1209 05:52:29.499112 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:29.499119 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:29.499180 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:29.539414 1795150 cri.go:89] found id: "2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30"
	I1209 05:52:29.539437 1795150 cri.go:89] found id: ""
	I1209 05:52:29.539446 1795150 logs.go:282] 1 containers: [2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30]
	I1209 05:52:29.539508 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:29.543273 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:29.543351 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:29.580492 1795150 cri.go:89] found id: ""
	I1209 05:52:29.580516 1795150 logs.go:282] 0 containers: []
	W1209 05:52:29.580525 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:29.580531 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:52:29.580596 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:52:29.622557 1795150 cri.go:89] found id: ""
	I1209 05:52:29.622603 1795150 logs.go:282] 0 containers: []
	W1209 05:52:29.622612 1795150 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:52:29.622632 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:52:29.622643 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:52:29.693994 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:29.694031 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:29.827039 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:29.827076 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:29.845863 1795150 logs.go:123] Gathering logs for kube-apiserver [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea] ...
	I1209 05:52:29.845892 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:52:29.901228 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:52:29.901258 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:52:29.952737 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:52:29.952772 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:52:30.069974 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:52:30.070015 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:30.118949 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:30.118986 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:30.197938 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:30.197958 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:52:30.197975 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:52:30.236098 1795150 logs.go:123] Gathering logs for kube-controller-manager [2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30] ...
	I1209 05:52:30.236127 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30"
	I1209 05:52:32.781010 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:52:32.781751 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:52:32.781837 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:32.781913 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:32.823675 1795150 cri.go:89] found id: "44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:52:32.823698 1795150 cri.go:89] found id: ""
	I1209 05:52:32.823707 1795150 logs.go:282] 1 containers: [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea]
	I1209 05:52:32.823771 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:32.828101 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:52:32.828189 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:32.885876 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:52:32.885899 1795150 cri.go:89] found id: ""
	I1209 05:52:32.885921 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:52:32.885979 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:32.889923 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:52:32.890003 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:32.930165 1795150 cri.go:89] found id: ""
	I1209 05:52:32.930191 1795150 logs.go:282] 0 containers: []
	W1209 05:52:32.930200 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:52:32.930207 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:32.930269 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:32.967444 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:52:32.967467 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:52:32.967472 1795150 cri.go:89] found id: ""
	I1209 05:52:32.967481 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:52:32.967545 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:32.971194 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:32.974613 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:32.974696 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:33.019876 1795150 cri.go:89] found id: ""
	I1209 05:52:33.019900 1795150 logs.go:282] 0 containers: []
	W1209 05:52:33.019908 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:33.019916 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:33.019987 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:33.059411 1795150 cri.go:89] found id: "2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30"
	I1209 05:52:33.059434 1795150 cri.go:89] found id: ""
	I1209 05:52:33.059443 1795150 logs.go:282] 1 containers: [2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30]
	I1209 05:52:33.059501 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:33.063131 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:33.063203 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:33.108281 1795150 cri.go:89] found id: ""
	I1209 05:52:33.108308 1795150 logs.go:282] 0 containers: []
	W1209 05:52:33.108317 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:33.108324 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:52:33.108383 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:52:33.146613 1795150 cri.go:89] found id: ""
	I1209 05:52:33.146687 1795150 logs.go:282] 0 containers: []
	W1209 05:52:33.146708 1795150 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:52:33.146737 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:33.146781 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:33.276679 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:33.276715 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:33.362732 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:33.362757 1795150 logs.go:123] Gathering logs for kube-apiserver [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea] ...
	I1209 05:52:33.362769 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:52:33.404646 1795150 logs.go:123] Gathering logs for kube-controller-manager [2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30] ...
	I1209 05:52:33.404678 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30"
	I1209 05:52:33.443793 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:33.443825 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:33.462046 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:52:33.462075 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:52:33.513193 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:52:33.513225 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:52:33.602873 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:52:33.602955 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:52:33.645994 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:52:33.646021 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:52:33.714744 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:52:33.714780 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:36.257513 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:52:36.258039 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:52:36.258098 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:36.258167 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:36.305852 1795150 cri.go:89] found id: "44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:52:36.305875 1795150 cri.go:89] found id: ""
	I1209 05:52:36.305884 1795150 logs.go:282] 1 containers: [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea]
	I1209 05:52:36.305942 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:36.309577 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:52:36.309657 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:36.348190 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:52:36.348214 1795150 cri.go:89] found id: ""
	I1209 05:52:36.348223 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:52:36.348280 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:36.351761 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:52:36.351836 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:36.387790 1795150 cri.go:89] found id: ""
	I1209 05:52:36.387825 1795150 logs.go:282] 0 containers: []
	W1209 05:52:36.387834 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:52:36.387841 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:36.387898 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:36.427209 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:52:36.427230 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:52:36.427236 1795150 cri.go:89] found id: ""
	I1209 05:52:36.427243 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:52:36.427297 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:36.430824 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:36.434162 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:36.434232 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:36.474862 1795150 cri.go:89] found id: ""
	I1209 05:52:36.474889 1795150 logs.go:282] 0 containers: []
	W1209 05:52:36.474904 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:36.474911 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:36.474969 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:36.512422 1795150 cri.go:89] found id: "2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30"
	I1209 05:52:36.512446 1795150 cri.go:89] found id: ""
	I1209 05:52:36.512455 1795150 logs.go:282] 1 containers: [2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30]
	I1209 05:52:36.512535 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:36.516229 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:36.516303 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:36.554902 1795150 cri.go:89] found id: ""
	I1209 05:52:36.554929 1795150 logs.go:282] 0 containers: []
	W1209 05:52:36.554937 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:36.554944 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:52:36.555005 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:52:36.607400 1795150 cri.go:89] found id: ""
	I1209 05:52:36.607428 1795150 logs.go:282] 0 containers: []
	W1209 05:52:36.607437 1795150 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:52:36.607455 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:52:36.607469 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:52:36.709256 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:36.709292 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:36.844313 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:52:36.844353 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:52:36.889379 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:52:36.889407 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:52:36.930664 1795150 logs.go:123] Gathering logs for kube-controller-manager [2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30] ...
	I1209 05:52:36.930698 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30"
	I1209 05:52:36.967768 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:52:36.967793 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:52:37.037197 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:52:37.037241 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:37.078975 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:37.079005 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:37.097111 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:37.097142 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:37.172901 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:37.172921 1795150 logs.go:123] Gathering logs for kube-apiserver [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea] ...
	I1209 05:52:37.172933 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:52:39.718658 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:52:39.719112 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:52:39.719175 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:39.719236 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:39.759949 1795150 cri.go:89] found id: "44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:52:39.759972 1795150 cri.go:89] found id: ""
	I1209 05:52:39.759981 1795150 logs.go:282] 1 containers: [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea]
	I1209 05:52:39.760040 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:39.763554 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:52:39.763627 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:39.799861 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:52:39.799882 1795150 cri.go:89] found id: ""
	I1209 05:52:39.799891 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:52:39.799945 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:39.803451 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:52:39.803532 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:39.841602 1795150 cri.go:89] found id: ""
	I1209 05:52:39.841650 1795150 logs.go:282] 0 containers: []
	W1209 05:52:39.841659 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:52:39.841666 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:39.841733 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:39.878400 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:52:39.878421 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:52:39.878427 1795150 cri.go:89] found id: ""
	I1209 05:52:39.878435 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:52:39.878491 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:39.882136 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:39.885455 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:39.885529 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:39.921999 1795150 cri.go:89] found id: ""
	I1209 05:52:39.922024 1795150 logs.go:282] 0 containers: []
	W1209 05:52:39.922033 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:39.922040 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:39.922097 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:39.960018 1795150 cri.go:89] found id: "2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30"
	I1209 05:52:39.960040 1795150 cri.go:89] found id: ""
	I1209 05:52:39.960049 1795150 logs.go:282] 1 containers: [2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30]
	I1209 05:52:39.960117 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:39.963600 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:39.963669 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:40.001624 1795150 cri.go:89] found id: ""
	I1209 05:52:40.001648 1795150 logs.go:282] 0 containers: []
	W1209 05:52:40.001658 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:40.001665 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:52:40.001723 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:52:40.053572 1795150 cri.go:89] found id: ""
	I1209 05:52:40.053601 1795150 logs.go:282] 0 containers: []
	W1209 05:52:40.053644 1795150 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:52:40.053663 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:52:40.053680 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:40.100624 1795150 logs.go:123] Gathering logs for kube-apiserver [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea] ...
	I1209 05:52:40.100654 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:52:40.144787 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:52:40.144819 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:52:40.248840 1795150 logs.go:123] Gathering logs for kube-controller-manager [2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30] ...
	I1209 05:52:40.248877 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30"
	I1209 05:52:40.291997 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:52:40.292028 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:52:40.362125 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:40.362165 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:40.503348 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:40.503398 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:40.521935 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:40.521970 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:40.604208 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:40.604271 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:52:40.604290 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:52:40.649777 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:52:40.649811 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:52:43.189126 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:52:43.189637 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:52:43.189691 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:43.189760 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:43.228178 1795150 cri.go:89] found id: "44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:52:43.228200 1795150 cri.go:89] found id: ""
	I1209 05:52:43.228209 1795150 logs.go:282] 1 containers: [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea]
	I1209 05:52:43.228265 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:43.232023 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:52:43.232099 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:43.286990 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:52:43.287023 1795150 cri.go:89] found id: ""
	I1209 05:52:43.287032 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:52:43.287090 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:43.291087 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:52:43.291162 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:43.345701 1795150 cri.go:89] found id: ""
	I1209 05:52:43.345730 1795150 logs.go:282] 0 containers: []
	W1209 05:52:43.345745 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:52:43.345751 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:43.345812 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:43.407620 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:52:43.407643 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:52:43.407648 1795150 cri.go:89] found id: ""
	I1209 05:52:43.407656 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:52:43.407712 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:43.411552 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:43.415454 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:43.415557 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:43.455694 1795150 cri.go:89] found id: ""
	I1209 05:52:43.455722 1795150 logs.go:282] 0 containers: []
	W1209 05:52:43.455731 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:43.455738 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:43.455802 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:43.500173 1795150 cri.go:89] found id: "2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30"
	I1209 05:52:43.500195 1795150 cri.go:89] found id: ""
	I1209 05:52:43.500204 1795150 logs.go:282] 1 containers: [2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30]
	I1209 05:52:43.500263 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:43.504002 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:43.504074 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:43.542768 1795150 cri.go:89] found id: ""
	I1209 05:52:43.542850 1795150 logs.go:282] 0 containers: []
	W1209 05:52:43.542898 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:43.542922 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:52:43.543021 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:52:43.584344 1795150 cri.go:89] found id: ""
	I1209 05:52:43.584368 1795150 logs.go:282] 0 containers: []
	W1209 05:52:43.584377 1795150 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:52:43.584430 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:52:43.584448 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:52:43.632652 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:52:43.632684 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:52:43.726112 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:52:43.726148 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:52:43.800579 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:43.800616 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:43.934786 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:43.934821 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:43.952777 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:43.952808 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:44.025433 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:44.025453 1795150 logs.go:123] Gathering logs for kube-apiserver [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea] ...
	I1209 05:52:44.025469 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:52:44.069837 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:52:44.069868 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:52:44.134256 1795150 logs.go:123] Gathering logs for kube-controller-manager [2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30] ...
	I1209 05:52:44.134334 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30"
	I1209 05:52:44.179710 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:52:44.179738 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
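	# Each "describe nodes" attempt above fails identically: the kubeconfig at
	# /var/lib/minikube/kubeconfig points kubectl at localhost:8443, which is
	# refused just like the direct healthz probe, so the kube-apiserver
	# container exists but is not serving. A hedged follow-up check from inside
	# the node (ss is standard iproute2; the kubectl line is copied from this log):
	sudo ss -ltnp | grep 8443
	sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig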
	I1209 05:52:46.733615 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:52:46.734080 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:52:46.734130 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:46.734188 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:46.770015 1795150 cri.go:89] found id: "44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:52:46.770033 1795150 cri.go:89] found id: ""
	I1209 05:52:46.770041 1795150 logs.go:282] 1 containers: [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea]
	I1209 05:52:46.770097 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:46.773602 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:52:46.773680 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:46.810372 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:52:46.810393 1795150 cri.go:89] found id: ""
	I1209 05:52:46.810401 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:52:46.810456 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:46.814021 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:52:46.814143 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:46.855664 1795150 cri.go:89] found id: ""
	I1209 05:52:46.855732 1795150 logs.go:282] 0 containers: []
	W1209 05:52:46.855755 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:52:46.855774 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:46.855861 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:46.893422 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:52:46.893456 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:52:46.893461 1795150 cri.go:89] found id: ""
	I1209 05:52:46.893469 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:52:46.893524 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:46.897148 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:46.900524 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:46.900620 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:46.938421 1795150 cri.go:89] found id: ""
	I1209 05:52:46.938444 1795150 logs.go:282] 0 containers: []
	W1209 05:52:46.938454 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:46.938460 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:46.938519 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:46.975832 1795150 cri.go:89] found id: "2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30"
	I1209 05:52:46.975897 1795150 cri.go:89] found id: ""
	I1209 05:52:46.975911 1795150 logs.go:282] 1 containers: [2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30]
	I1209 05:52:46.975979 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:46.979495 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:46.979566 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:47.017982 1795150 cri.go:89] found id: ""
	I1209 05:52:47.018010 1795150 logs.go:282] 0 containers: []
	W1209 05:52:47.018020 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:47.018027 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:52:47.018090 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:52:47.064139 1795150 cri.go:89] found id: ""
	I1209 05:52:47.064165 1795150 logs.go:282] 0 containers: []
	W1209 05:52:47.064174 1795150 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:52:47.064190 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:47.064201 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:47.207171 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:47.207206 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:47.283063 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:47.283087 1795150 logs.go:123] Gathering logs for kube-controller-manager [2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30] ...
	I1209 05:52:47.283102 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30"
	I1209 05:52:47.321032 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:52:47.321066 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:52:47.395939 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:52:47.395981 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:47.440570 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:47.440603 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:47.459127 1795150 logs.go:123] Gathering logs for kube-apiserver [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea] ...
	I1209 05:52:47.459155 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:52:47.509355 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:52:47.509387 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:52:47.564310 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:52:47.564342 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:52:47.659292 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:52:47.659331 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:52:50.195317 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:52:50.195779 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:52:50.195854 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:50.195934 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:50.238809 1795150 cri.go:89] found id: "44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:52:50.238833 1795150 cri.go:89] found id: ""
	I1209 05:52:50.238842 1795150 logs.go:282] 1 containers: [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea]
	I1209 05:52:50.238902 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:50.242605 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:52:50.242683 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:50.284102 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:52:50.284124 1795150 cri.go:89] found id: ""
	I1209 05:52:50.284132 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:52:50.284195 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:50.287680 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:52:50.287763 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:50.323723 1795150 cri.go:89] found id: ""
	I1209 05:52:50.323757 1795150 logs.go:282] 0 containers: []
	W1209 05:52:50.323766 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:52:50.323773 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:50.323865 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:50.363127 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:52:50.363150 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:52:50.363157 1795150 cri.go:89] found id: ""
	I1209 05:52:50.363176 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:52:50.363269 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:50.367057 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:50.370800 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:50.370873 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:50.407358 1795150 cri.go:89] found id: ""
	I1209 05:52:50.407383 1795150 logs.go:282] 0 containers: []
	W1209 05:52:50.407392 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:50.407399 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:50.407461 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:50.446928 1795150 cri.go:89] found id: "2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30"
	I1209 05:52:50.446961 1795150 cri.go:89] found id: ""
	I1209 05:52:50.446971 1795150 logs.go:282] 1 containers: [2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30]
	I1209 05:52:50.447030 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:50.450624 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:50.450726 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:50.491699 1795150 cri.go:89] found id: ""
	I1209 05:52:50.491724 1795150 logs.go:282] 0 containers: []
	W1209 05:52:50.491733 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:50.491739 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:52:50.491828 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:52:50.531349 1795150 cri.go:89] found id: ""
	I1209 05:52:50.531379 1795150 logs.go:282] 0 containers: []
	W1209 05:52:50.531388 1795150 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:52:50.531402 1795150 logs.go:123] Gathering logs for kube-controller-manager [2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30] ...
	I1209 05:52:50.531414 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30"
	I1209 05:52:50.570924 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:52:50.570953 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:52:50.639936 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:52:50.639975 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:50.684614 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:50.684683 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:50.815987 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:50.816026 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:50.843383 1795150 logs.go:123] Gathering logs for kube-apiserver [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea] ...
	I1209 05:52:50.843410 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:52:50.895682 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:52:50.895755 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:52:50.949269 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:52:50.949303 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:52:51.058861 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:51.058899 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:51.129310 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:51.129375 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:52:51.129402 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:52:53.666491 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:52:53.667007 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:52:53.667081 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:53.667164 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:53.704865 1795150 cri.go:89] found id: "44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:52:53.704886 1795150 cri.go:89] found id: ""
	I1209 05:52:53.704894 1795150 logs.go:282] 1 containers: [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea]
	I1209 05:52:53.704972 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:53.708598 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:52:53.708680 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:53.747227 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:52:53.747258 1795150 cri.go:89] found id: ""
	I1209 05:52:53.747267 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:52:53.747350 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:53.751144 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:52:53.751245 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:53.790686 1795150 cri.go:89] found id: ""
	I1209 05:52:53.790708 1795150 logs.go:282] 0 containers: []
	W1209 05:52:53.790716 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:52:53.790723 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:53.790782 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:53.837270 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:52:53.837293 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:52:53.837298 1795150 cri.go:89] found id: ""
	I1209 05:52:53.837306 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:52:53.837363 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:53.843614 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:53.850052 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:53.850124 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:53.898444 1795150 cri.go:89] found id: ""
	I1209 05:52:53.898469 1795150 logs.go:282] 0 containers: []
	W1209 05:52:53.898478 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:53.898488 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:53.898552 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:53.936338 1795150 cri.go:89] found id: "2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30"
	I1209 05:52:53.936361 1795150 cri.go:89] found id: ""
	I1209 05:52:53.936369 1795150 logs.go:282] 1 containers: [2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30]
	I1209 05:52:53.936431 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:53.940157 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:53.940246 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:53.980370 1795150 cri.go:89] found id: ""
	I1209 05:52:53.980437 1795150 logs.go:282] 0 containers: []
	W1209 05:52:53.980460 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:53.980478 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:52:53.980548 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:52:54.027076 1795150 cri.go:89] found id: ""
	I1209 05:52:54.027154 1795150 logs.go:282] 0 containers: []
	W1209 05:52:54.027177 1795150 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:52:54.027205 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:54.027223 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:54.045247 1795150 logs.go:123] Gathering logs for kube-apiserver [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea] ...
	I1209 05:52:54.045280 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:52:54.086793 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:52:54.086825 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:52:54.132465 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:52:54.132497 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:52:54.168187 1795150 logs.go:123] Gathering logs for kube-controller-manager [2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30] ...
	I1209 05:52:54.168218 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30"
	I1209 05:52:54.208167 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:52:54.208194 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:52:54.278949 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:52:54.278988 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:54.321608 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:54.321642 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:54.446223 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:54.446262 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:54.516889 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:54.516910 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:52:54.516923 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:52:57.111663 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:52:57.112100 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:52:57.112157 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:52:57.112213 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:52:57.151875 1795150 cri.go:89] found id: "44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:52:57.151898 1795150 cri.go:89] found id: ""
	I1209 05:52:57.151908 1795150 logs.go:282] 1 containers: [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea]
	I1209 05:52:57.151967 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:57.155572 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:52:57.155647 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:52:57.199469 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:52:57.199493 1795150 cri.go:89] found id: ""
	I1209 05:52:57.199502 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:52:57.199560 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:57.203353 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:52:57.203469 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:52:57.242643 1795150 cri.go:89] found id: ""
	I1209 05:52:57.242669 1795150 logs.go:282] 0 containers: []
	W1209 05:52:57.242678 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:52:57.242684 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:52:57.242743 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:52:57.287280 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:52:57.287304 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:52:57.287374 1795150 cri.go:89] found id: ""
	I1209 05:52:57.287388 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:52:57.287467 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:57.291328 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:57.294791 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:52:57.294880 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:52:57.337155 1795150 cri.go:89] found id: ""
	I1209 05:52:57.337179 1795150 logs.go:282] 0 containers: []
	W1209 05:52:57.337188 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:52:57.337195 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:52:57.337254 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:52:57.375533 1795150 cri.go:89] found id: "2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30"
	I1209 05:52:57.375569 1795150 cri.go:89] found id: ""
	I1209 05:52:57.375577 1795150 logs.go:282] 1 containers: [2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30]
	I1209 05:52:57.375639 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:52:57.379321 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:52:57.379397 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:52:57.417122 1795150 cri.go:89] found id: ""
	I1209 05:52:57.417150 1795150 logs.go:282] 0 containers: []
	W1209 05:52:57.417159 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:52:57.417166 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:52:57.417229 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:52:57.455120 1795150 cri.go:89] found id: ""
	I1209 05:52:57.455143 1795150 logs.go:282] 0 containers: []
	W1209 05:52:57.455151 1795150 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:52:57.455164 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:52:57.455176 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:52:57.525312 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:52:57.525331 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:52:57.525344 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:52:57.573062 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:52:57.573095 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:52:57.672108 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:52:57.672160 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:52:57.719589 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:52:57.719619 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:52:57.850746 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:52:57.850793 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:52:57.869763 1795150 logs.go:123] Gathering logs for kube-apiserver [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea] ...
	I1209 05:52:57.869794 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:52:57.913699 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:52:57.913735 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:52:57.950477 1795150 logs.go:123] Gathering logs for kube-controller-manager [2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30] ...
	I1209 05:52:57.950505 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30"
	I1209 05:52:57.988711 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:52:57.988738 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:53:00.566875 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:53:00.567343 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:53:00.567408 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:00.567482 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:00.624672 1795150 cri.go:89] found id: "44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:53:00.624695 1795150 cri.go:89] found id: ""
	I1209 05:53:00.624705 1795150 logs.go:282] 1 containers: [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea]
	I1209 05:53:00.624758 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:53:00.629047 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:53:00.629115 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:00.676342 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:53:00.676368 1795150 cri.go:89] found id: ""
	I1209 05:53:00.676378 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:53:00.676437 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:53:00.680234 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:53:00.680319 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:00.719496 1795150 cri.go:89] found id: ""
	I1209 05:53:00.719523 1795150 logs.go:282] 0 containers: []
	W1209 05:53:00.719532 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:53:00.719542 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:00.719599 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:00.757642 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:53:00.757664 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:53:00.757669 1795150 cri.go:89] found id: ""
	I1209 05:53:00.757676 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:53:00.757733 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:53:00.761182 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:53:00.764532 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:00.764607 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:00.803111 1795150 cri.go:89] found id: ""
	I1209 05:53:00.803136 1795150 logs.go:282] 0 containers: []
	W1209 05:53:00.803144 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:00.803151 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:00.803213 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:00.844553 1795150 cri.go:89] found id: "2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30"
	I1209 05:53:00.844576 1795150 cri.go:89] found id: ""
	I1209 05:53:00.844593 1795150 logs.go:282] 1 containers: [2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30]
	I1209 05:53:00.844649 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:53:00.848506 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:00.848591 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:00.884717 1795150 cri.go:89] found id: ""
	I1209 05:53:00.884801 1795150 logs.go:282] 0 containers: []
	W1209 05:53:00.884824 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:00.884839 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:53:00.884914 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:53:00.923050 1795150 cri.go:89] found id: ""
	I1209 05:53:00.923076 1795150 logs.go:282] 0 containers: []
	W1209 05:53:00.923085 1795150 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:53:00.923100 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:53:00.923112 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:00.970864 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:00.970891 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:00.988816 1795150 logs.go:123] Gathering logs for kube-apiserver [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea] ...
	I1209 05:53:00.988845 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:53:01.033149 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:53:01.033180 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:53:01.082791 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:01.082825 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:01.209631 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:01.209672 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:01.282439 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:53:01.282462 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:53:01.282475 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:53:01.400571 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:53:01.400602 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:53:01.438922 1795150 logs.go:123] Gathering logs for kube-controller-manager [2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30] ...
	I1209 05:53:01.438949 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30"
	I1209 05:53:01.478159 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:53:01.478186 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:53:04.047614 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:53:04.048093 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:53:04.048148 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:04.048216 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:04.087678 1795150 cri.go:89] found id: "44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:53:04.087753 1795150 cri.go:89] found id: ""
	I1209 05:53:04.087778 1795150 logs.go:282] 1 containers: [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea]
	I1209 05:53:04.087873 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:53:04.091911 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:53:04.092007 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:04.130690 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:53:04.130765 1795150 cri.go:89] found id: ""
	I1209 05:53:04.130788 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:53:04.130876 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:53:04.134565 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:53:04.134658 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:04.171685 1795150 cri.go:89] found id: ""
	I1209 05:53:04.171712 1795150 logs.go:282] 0 containers: []
	W1209 05:53:04.171721 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:53:04.171728 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:04.171791 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:04.212277 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:53:04.212308 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:53:04.212314 1795150 cri.go:89] found id: ""
	I1209 05:53:04.212321 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:53:04.212384 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:53:04.216431 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:53:04.219846 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:04.219944 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:04.259540 1795150 cri.go:89] found id: ""
	I1209 05:53:04.259569 1795150 logs.go:282] 0 containers: []
	W1209 05:53:04.259578 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:04.259584 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:04.259644 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:04.303638 1795150 cri.go:89] found id: "2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30"
	I1209 05:53:04.303660 1795150 cri.go:89] found id: ""
	I1209 05:53:04.303668 1795150 logs.go:282] 1 containers: [2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30]
	I1209 05:53:04.303726 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:53:04.307365 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:04.307484 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:04.354994 1795150 cri.go:89] found id: ""
	I1209 05:53:04.355063 1795150 logs.go:282] 0 containers: []
	W1209 05:53:04.355088 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:04.355109 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:53:04.355198 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:53:04.403154 1795150 cri.go:89] found id: ""
	I1209 05:53:04.403177 1795150 logs.go:282] 0 containers: []
	W1209 05:53:04.403186 1795150 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:53:04.403200 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:04.403211 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:04.526632 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:04.526669 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:04.612594 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:53:04.612617 1795150 logs.go:123] Gathering logs for kube-apiserver [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea] ...
	I1209 05:53:04.612631 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:53:04.653414 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:53:04.653448 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:53:04.703967 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:53:04.703997 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:53:04.741940 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:04.741970 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:04.759954 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:53:04.759982 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:53:04.875967 1795150 logs.go:123] Gathering logs for kube-controller-manager [2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30] ...
	I1209 05:53:04.876007 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30"
	I1209 05:53:04.914648 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:53:04.914678 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:53:04.986269 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:53:04.986305 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:07.542501 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:53:07.542946 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:53:07.542987 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:07.543041 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:07.584510 1795150 cri.go:89] found id: "44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:53:07.584531 1795150 cri.go:89] found id: ""
	I1209 05:53:07.584540 1795150 logs.go:282] 1 containers: [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea]
	I1209 05:53:07.584601 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:53:07.588545 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:53:07.588616 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:07.635725 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:53:07.635746 1795150 cri.go:89] found id: ""
	I1209 05:53:07.635755 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:53:07.635825 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:53:07.639336 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:53:07.639411 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:07.679160 1795150 cri.go:89] found id: ""
	I1209 05:53:07.679181 1795150 logs.go:282] 0 containers: []
	W1209 05:53:07.679189 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:53:07.679196 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:07.679253 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:07.721325 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:53:07.721344 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:53:07.721348 1795150 cri.go:89] found id: ""
	I1209 05:53:07.721356 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:53:07.721416 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:53:07.725030 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:53:07.728314 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:07.728393 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:07.771862 1795150 cri.go:89] found id: ""
	I1209 05:53:07.771885 1795150 logs.go:282] 0 containers: []
	W1209 05:53:07.771894 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:07.771900 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:07.771958 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:07.811491 1795150 cri.go:89] found id: "2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30"
	I1209 05:53:07.811513 1795150 cri.go:89] found id: ""
	I1209 05:53:07.811522 1795150 logs.go:282] 1 containers: [2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30]
	I1209 05:53:07.811579 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:53:07.815450 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:07.815524 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:07.854605 1795150 cri.go:89] found id: ""
	I1209 05:53:07.854632 1795150 logs.go:282] 0 containers: []
	W1209 05:53:07.854641 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:07.854648 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:53:07.854711 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:53:07.893843 1795150 cri.go:89] found id: ""
	I1209 05:53:07.893870 1795150 logs.go:282] 0 containers: []
	W1209 05:53:07.893879 1795150 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:53:07.893894 1795150 logs.go:123] Gathering logs for kube-apiserver [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea] ...
	I1209 05:53:07.893941 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:53:07.940785 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:53:07.940818 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:53:08.039183 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:53:08.039224 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:53:08.113940 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:53:08.113977 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:08.164676 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:08.164709 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:08.300500 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:08.300537 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:08.318777 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:08.318806 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:08.390205 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:53:08.390225 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:53:08.390238 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:53:08.435950 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:53:08.435983 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:53:08.474168 1795150 logs.go:123] Gathering logs for kube-controller-manager [2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30] ...
	I1209 05:53:08.474201 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30"
	I1209 05:53:11.016800 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:53:11.017338 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:53:11.017396 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:11.017458 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:11.061808 1795150 cri.go:89] found id: "44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:53:11.061885 1795150 cri.go:89] found id: ""
	I1209 05:53:11.061901 1795150 logs.go:282] 1 containers: [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea]
	I1209 05:53:11.061963 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:53:11.065759 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:53:11.065891 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:11.115917 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:53:11.115940 1795150 cri.go:89] found id: ""
	I1209 05:53:11.115950 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:53:11.116025 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:53:11.120104 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:53:11.120218 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:11.165302 1795150 cri.go:89] found id: ""
	I1209 05:53:11.165326 1795150 logs.go:282] 0 containers: []
	W1209 05:53:11.165335 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:53:11.165342 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:11.165403 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:11.207499 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:53:11.207523 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:53:11.207529 1795150 cri.go:89] found id: ""
	I1209 05:53:11.207537 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:53:11.207608 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:53:11.211242 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:53:11.214976 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:11.215068 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:11.251548 1795150 cri.go:89] found id: ""
	I1209 05:53:11.251572 1795150 logs.go:282] 0 containers: []
	W1209 05:53:11.251580 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:11.251587 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:11.251648 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:11.293059 1795150 cri.go:89] found id: "2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30"
	I1209 05:53:11.293082 1795150 cri.go:89] found id: ""
	I1209 05:53:11.293091 1795150 logs.go:282] 1 containers: [2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30]
	I1209 05:53:11.293145 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:53:11.296723 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:11.296799 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:11.334122 1795150 cri.go:89] found id: ""
	I1209 05:53:11.334147 1795150 logs.go:282] 0 containers: []
	W1209 05:53:11.334156 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:11.334162 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:53:11.334220 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:53:11.371249 1795150 cri.go:89] found id: ""
	I1209 05:53:11.371275 1795150 logs.go:282] 0 containers: []
	W1209 05:53:11.371285 1795150 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:53:11.371299 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:11.371332 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:11.498052 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:11.498086 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:11.516592 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:11.516676 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:11.603874 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:53:11.603903 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:53:11.603917 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:53:11.653648 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:53:11.653680 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:53:11.690465 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:53:11.690511 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:11.731567 1795150 logs.go:123] Gathering logs for kube-apiserver [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea] ...
	I1209 05:53:11.731597 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:53:11.775217 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:53:11.775248 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:53:11.891887 1795150 logs.go:123] Gathering logs for kube-controller-manager [2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30] ...
	I1209 05:53:11.891968 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30"
	I1209 05:53:11.931978 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:53:11.932006 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:53:14.502381 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:53:14.502841 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:53:14.502897 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:14.502955 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:14.540900 1795150 cri.go:89] found id: "44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:53:14.540923 1795150 cri.go:89] found id: ""
	I1209 05:53:14.540932 1795150 logs.go:282] 1 containers: [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea]
	I1209 05:53:14.540993 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:53:14.544714 1795150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:53:14.544794 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:14.582820 1795150 cri.go:89] found id: "6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:53:14.582845 1795150 cri.go:89] found id: ""
	I1209 05:53:14.582854 1795150 logs.go:282] 1 containers: [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e]
	I1209 05:53:14.582912 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:53:14.586514 1795150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:53:14.586607 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:14.625450 1795150 cri.go:89] found id: ""
	I1209 05:53:14.625480 1795150 logs.go:282] 0 containers: []
	W1209 05:53:14.625495 1795150 logs.go:284] No container was found matching "coredns"
	I1209 05:53:14.625502 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:14.625590 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:14.668333 1795150 cri.go:89] found id: "2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:53:14.668394 1795150 cri.go:89] found id: "672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:53:14.668406 1795150 cri.go:89] found id: ""
	I1209 05:53:14.668414 1795150 logs.go:282] 2 containers: [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a]
	I1209 05:53:14.668475 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:53:14.672705 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:53:14.676389 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:14.676486 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:14.723383 1795150 cri.go:89] found id: ""
	I1209 05:53:14.723452 1795150 logs.go:282] 0 containers: []
	W1209 05:53:14.723469 1795150 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:14.723481 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:14.723551 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:14.761430 1795150 cri.go:89] found id: "2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30"
	I1209 05:53:14.761495 1795150 cri.go:89] found id: ""
	I1209 05:53:14.761519 1795150 logs.go:282] 1 containers: [2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30]
	I1209 05:53:14.761584 1795150 ssh_runner.go:195] Run: which crictl
	I1209 05:53:14.765161 1795150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:14.765233 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:14.806874 1795150 cri.go:89] found id: ""
	I1209 05:53:14.806900 1795150 logs.go:282] 0 containers: []
	W1209 05:53:14.806909 1795150 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:14.806915 1795150 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:53:14.806974 1795150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:53:14.862775 1795150 cri.go:89] found id: ""
	I1209 05:53:14.862797 1795150 logs.go:282] 0 containers: []
	W1209 05:53:14.862805 1795150 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:53:14.862821 1795150 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:14.862834 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:14.884721 1795150 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:14.884753 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:14.959627 1795150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:53:14.959649 1795150 logs.go:123] Gathering logs for etcd [6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e] ...
	I1209 05:53:14.959667 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f55c141f1e1778b54eb2e3ac2ffee091a228540cd44a992ff7345d0d55f462e"
	I1209 05:53:15.027264 1795150 logs.go:123] Gathering logs for kube-scheduler [2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16] ...
	I1209 05:53:15.027324 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f11c81273aae5274dc8e77581018af6414d6072c5c2fa12bde72a49efd96e16"
	I1209 05:53:15.126367 1795150 logs.go:123] Gathering logs for kube-scheduler [672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a] ...
	I1209 05:53:15.126409 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 672c13e5b856c64b3d513deacd68842d1b9d687726b4b406ae7a440faf723b3a"
	I1209 05:53:15.166527 1795150 logs.go:123] Gathering logs for kube-controller-manager [2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30] ...
	I1209 05:53:15.166561 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fb55b89e1ebb2a9738b6a255fdd214da1d7daabf6852993c219ebea9a5b6e30"
	I1209 05:53:15.206662 1795150 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:53:15.206693 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:53:15.275327 1795150 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:15.275378 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 05:53:15.407595 1795150 logs.go:123] Gathering logs for kube-apiserver [44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea] ...
	I1209 05:53:15.407631 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44cad3381db295c86ad0e2279c93fc76dbf66f76db9e6c45fb73788a029080ea"
	I1209 05:53:15.456721 1795150 logs.go:123] Gathering logs for container status ...
	I1209 05:53:15.456760 1795150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:18.009495 1795150 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1209 05:53:18.010025 1795150 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1209 05:53:18.010087 1795150 kubeadm.go:602] duration metric: took 4m4.760382607s to restartPrimaryControlPlane
	W1209 05:53:18.010161 1795150 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1209 05:53:18.010230 1795150 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1209 05:53:18.802669 1795150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:53:18.814820 1795150 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 05:53:18.824292 1795150 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1209 05:53:18.824356 1795150 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 05:53:18.833706 1795150 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 05:53:18.833727 1795150 kubeadm.go:158] found existing configuration files:
	
	I1209 05:53:18.833805 1795150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 05:53:18.842986 1795150 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 05:53:18.843075 1795150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 05:53:18.852015 1795150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 05:53:18.861045 1795150 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 05:53:18.861112 1795150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 05:53:18.869987 1795150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 05:53:18.879527 1795150 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 05:53:18.879615 1795150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 05:53:18.888684 1795150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 05:53:18.898437 1795150 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 05:53:18.898509 1795150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
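
The four grep/rm pairs above apply one rule: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is treated as stale and deleted before kubeadm init. An equivalent condensed sketch (endpoint and file list taken verbatim from the commands above):

	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
		# Keep the file only if it already points at the expected endpoint.
		sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done

In this run all four files are already absent (the earlier kubeadm reset removed them), so every grep exits with status 2 and every rm is a no-op.
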
	I1209 05:53:18.908507 1795150 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 05:53:18.954531 1795150 kubeadm.go:319] [init] Using Kubernetes version: v1.32.0
	I1209 05:53:18.954864 1795150 kubeadm.go:319] [preflight] Running pre-flight checks
	I1209 05:53:18.979328 1795150 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1209 05:53:18.979405 1795150 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1209 05:53:18.979447 1795150 kubeadm.go:319] OS: Linux
	I1209 05:53:18.979497 1795150 kubeadm.go:319] CGROUPS_CPU: enabled
	I1209 05:53:18.979550 1795150 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1209 05:53:18.979601 1795150 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1209 05:53:18.979653 1795150 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1209 05:53:18.979705 1795150 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1209 05:53:18.979763 1795150 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1209 05:53:18.979813 1795150 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1209 05:53:18.979865 1795150 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1209 05:53:18.979915 1795150 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1209 05:53:19.045156 1795150 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 05:53:19.045272 1795150 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 05:53:19.045368 1795150 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 05:53:19.054557 1795150 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 05:53:20.713448 1771230 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001358954s
	I1209 05:53:20.713486 1771230 kubeadm.go:319] 
	I1209 05:53:20.713551 1771230 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1209 05:53:20.713586 1771230 kubeadm.go:319] 	- The kubelet is not running
	I1209 05:53:20.713721 1771230 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1209 05:53:20.713742 1771230 kubeadm.go:319] 
	I1209 05:53:20.713842 1771230 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1209 05:53:20.713873 1771230 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1209 05:53:20.713903 1771230 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1209 05:53:20.713908 1771230 kubeadm.go:319] 
	I1209 05:53:20.718108 1771230 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1209 05:53:20.718533 1771230 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1209 05:53:20.718659 1771230 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 05:53:20.718904 1771230 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1209 05:53:20.718911 1771230 kubeadm.go:319] 
	I1209 05:53:20.718980 1771230 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1209 05:53:20.719035 1771230 kubeadm.go:403] duration metric: took 12m8.360561884s to StartCluster
	I1209 05:53:20.719083 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 05:53:20.719142 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 05:53:19.058864 1795150 out.go:252]   - Generating certificates and keys ...
	I1209 05:53:19.058963 1795150 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1209 05:53:19.059038 1795150 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1209 05:53:19.059119 1795150 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 05:53:19.059184 1795150 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1209 05:53:19.059258 1795150 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 05:53:19.059316 1795150 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1209 05:53:19.059383 1795150 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1209 05:53:19.059449 1795150 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1209 05:53:19.059526 1795150 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 05:53:19.059602 1795150 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 05:53:19.059644 1795150 kubeadm.go:319] [certs] Using the existing "sa" key
	I1209 05:53:19.059704 1795150 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 05:53:19.280881 1795150 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 05:53:19.493087 1795150 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 05:53:19.729975 1795150 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 05:53:19.967912 1795150 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 05:53:20.886967 1795150 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 05:53:20.887263 1795150 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 05:53:20.890428 1795150 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 05:53:20.759611 1771230 cri.go:89] found id: ""
	I1209 05:53:20.759633 1771230 logs.go:282] 0 containers: []
	W1209 05:53:20.759641 1771230 logs.go:284] No container was found matching "kube-apiserver"
	I1209 05:53:20.759647 1771230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 05:53:20.759709 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 05:53:20.790036 1771230 cri.go:89] found id: ""
	I1209 05:53:20.790058 1771230 logs.go:282] 0 containers: []
	W1209 05:53:20.790067 1771230 logs.go:284] No container was found matching "etcd"
	I1209 05:53:20.790073 1771230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 05:53:20.790133 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 05:53:20.841780 1771230 cri.go:89] found id: ""
	I1209 05:53:20.841803 1771230 logs.go:282] 0 containers: []
	W1209 05:53:20.841812 1771230 logs.go:284] No container was found matching "coredns"
	I1209 05:53:20.841817 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 05:53:20.841876 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 05:53:20.909264 1771230 cri.go:89] found id: ""
	I1209 05:53:20.909286 1771230 logs.go:282] 0 containers: []
	W1209 05:53:20.909295 1771230 logs.go:284] No container was found matching "kube-scheduler"
	I1209 05:53:20.909301 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 05:53:20.909362 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 05:53:20.945721 1771230 cri.go:89] found id: ""
	I1209 05:53:20.945744 1771230 logs.go:282] 0 containers: []
	W1209 05:53:20.945752 1771230 logs.go:284] No container was found matching "kube-proxy"
	I1209 05:53:20.945758 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 05:53:20.945818 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 05:53:20.991021 1771230 cri.go:89] found id: ""
	I1209 05:53:20.991043 1771230 logs.go:282] 0 containers: []
	W1209 05:53:20.991051 1771230 logs.go:284] No container was found matching "kube-controller-manager"
	I1209 05:53:20.991059 1771230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 05:53:20.991117 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 05:53:21.037379 1771230 cri.go:89] found id: ""
	I1209 05:53:21.037399 1771230 logs.go:282] 0 containers: []
	W1209 05:53:21.037407 1771230 logs.go:284] No container was found matching "kindnet"
	I1209 05:53:21.037413 1771230 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1209 05:53:21.037473 1771230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1209 05:53:21.068751 1771230 cri.go:89] found id: ""
	I1209 05:53:21.068772 1771230 logs.go:282] 0 containers: []
	W1209 05:53:21.068781 1771230 logs.go:284] No container was found matching "storage-provisioner"
	I1209 05:53:21.068790 1771230 logs.go:123] Gathering logs for dmesg ...
	I1209 05:53:21.068804 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 05:53:21.089143 1771230 logs.go:123] Gathering logs for describe nodes ...
	I1209 05:53:21.089212 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1209 05:53:21.183462 1771230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1209 05:53:21.183480 1771230 logs.go:123] Gathering logs for CRI-O ...
	I1209 05:53:21.183492 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 05:53:21.220772 1771230 logs.go:123] Gathering logs for container status ...
	I1209 05:53:21.220846 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 05:53:21.257659 1771230 logs.go:123] Gathering logs for kubelet ...
	I1209 05:53:21.257726 1771230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 05:53:21.331748 1771230 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001358954s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1209 05:53:21.331853 1771230 out.go:285] * 
	W1209 05:53:21.332055 1771230 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001358954s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 05:53:21.332108 1771230 out.go:285] * 
	W1209 05:53:21.334360 1771230 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 05:53:21.340405 1771230 out.go:203] 
	W1209 05:53:21.344443 1771230 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001358954s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1209 05:53:21.344726 1771230 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1209 05:53:21.344801 1771230 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1209 05:53:21.349870 1771230 out.go:203] 
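
The suggestion above names the only knob this failure path exposes from the minikube side. A sketch of the retry it proposes, using this run's profile name (the flag text is quoted from the suggestion itself; whether it resolves the cgroup v1 validation failure is not established by this log):

	out/minikube-linux-arm64 start -p kubernetes-upgrade-054206 \
		--extra-config=kubelet.cgroup-driver=systemd

The preflight warning points at a different knob: for kubelet v1.35 or newer on a cgroup v1 host, the KubeletConfiguration option 'FailCgroupV1' must be set to 'false'. The warning gives only that Go field name; its serialized spelling in a kubelet config file (e.g. failCgroupV1) is an assumption here.
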
	I1209 05:53:20.893776 1795150 out.go:252]   - Booting up control plane ...
	I1209 05:53:20.893888 1795150 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 05:53:20.894002 1795150 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 05:53:20.894884 1795150 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 05:53:20.912164 1795150 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 05:53:20.924021 1795150 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 05:53:20.924084 1795150 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1209 05:53:21.039780 1795150 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 05:53:21.039908 1795150 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 05:53:22.041777 1795150 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001856272s
	I1209 05:53:22.041867 1795150 kubeadm.go:319] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	
	
	==> CRI-O <==
	Dec 09 05:41:07 kubernetes-upgrade-054206 crio[614]: time="2025-12-09T05:41:07.367654937Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 09 05:41:07 kubernetes-upgrade-054206 crio[614]: time="2025-12-09T05:41:07.367690384Z" level=info msg="Starting seccomp notifier watcher"
	Dec 09 05:41:07 kubernetes-upgrade-054206 crio[614]: time="2025-12-09T05:41:07.367739426Z" level=info msg="Create NRI interface"
	Dec 09 05:41:07 kubernetes-upgrade-054206 crio[614]: time="2025-12-09T05:41:07.367858492Z" level=info msg="built-in NRI default validator is disabled"
	Dec 09 05:41:07 kubernetes-upgrade-054206 crio[614]: time="2025-12-09T05:41:07.36788218Z" level=info msg="runtime interface created"
	Dec 09 05:41:07 kubernetes-upgrade-054206 crio[614]: time="2025-12-09T05:41:07.367895399Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 09 05:41:07 kubernetes-upgrade-054206 crio[614]: time="2025-12-09T05:41:07.367901848Z" level=info msg="runtime interface starting up..."
	Dec 09 05:41:07 kubernetes-upgrade-054206 crio[614]: time="2025-12-09T05:41:07.367908798Z" level=info msg="starting plugins..."
	Dec 09 05:41:07 kubernetes-upgrade-054206 crio[614]: time="2025-12-09T05:41:07.367920736Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 09 05:41:07 kubernetes-upgrade-054206 crio[614]: time="2025-12-09T05:41:07.367983186Z" level=info msg="No systemd watchdog enabled"
	Dec 09 05:41:07 kubernetes-upgrade-054206 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 09 05:45:16 kubernetes-upgrade-054206 crio[614]: time="2025-12-09T05:45:16.309131045Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=a7e0e5b2-bbfa-45ce-930b-48380437080b name=/runtime.v1.ImageService/ImageStatus
	Dec 09 05:45:16 kubernetes-upgrade-054206 crio[614]: time="2025-12-09T05:45:16.311459125Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=abae90dd-51b1-4938-b7c7-42ddb4e68256 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 05:45:16 kubernetes-upgrade-054206 crio[614]: time="2025-12-09T05:45:16.312404177Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=4c8fbafd-d526-4f76-b66e-ed9cf60996cb name=/runtime.v1.ImageService/ImageStatus
	Dec 09 05:45:16 kubernetes-upgrade-054206 crio[614]: time="2025-12-09T05:45:16.313280132Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=882e8d1f-5fec-43e1-bd05-e6b300c6939b name=/runtime.v1.ImageService/ImageStatus
	Dec 09 05:45:16 kubernetes-upgrade-054206 crio[614]: time="2025-12-09T05:45:16.314040952Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=32239d84-791a-4e7a-9292-e115436b55fd name=/runtime.v1.ImageService/ImageStatus
	Dec 09 05:45:16 kubernetes-upgrade-054206 crio[614]: time="2025-12-09T05:45:16.314756446Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=c72abe74-0273-4d43-812a-e57219d86833 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 05:45:16 kubernetes-upgrade-054206 crio[614]: time="2025-12-09T05:45:16.318758701Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=981675e2-4587-4f29-af76-390443f596b3 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 05:49:18 kubernetes-upgrade-054206 crio[614]: time="2025-12-09T05:49:18.511818941Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=77d001af-7493-4d6f-aa1d-113a8f222444 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 05:49:18 kubernetes-upgrade-054206 crio[614]: time="2025-12-09T05:49:18.514782622Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=8d59ddb6-ca87-48ac-a151-47b13420343c name=/runtime.v1.ImageService/ImageStatus
	Dec 09 05:49:18 kubernetes-upgrade-054206 crio[614]: time="2025-12-09T05:49:18.515563086Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=0be193a1-6cec-4e11-a28f-824c1a9ff61a name=/runtime.v1.ImageService/ImageStatus
	Dec 09 05:49:18 kubernetes-upgrade-054206 crio[614]: time="2025-12-09T05:49:18.51618422Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=6dbc2e69-548e-435b-94e3-391efdef9de3 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 05:49:18 kubernetes-upgrade-054206 crio[614]: time="2025-12-09T05:49:18.516751512Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=2363efed-9d06-46b7-bb98-04263aba14b2 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 05:49:18 kubernetes-upgrade-054206 crio[614]: time="2025-12-09T05:49:18.518726958Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=23881c44-19f1-4433-9bc1-e87524cc5aae name=/runtime.v1.ImageService/ImageStatus
	Dec 09 05:49:18 kubernetes-upgrade-054206 crio[614]: time="2025-12-09T05:49:18.522755701Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=38f2cef4-ff37-41bf-8f17-0de74c7758b8 name=/runtime.v1.ImageService/ImageStatus
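
CRI-O itself stays healthy for the whole window: after finishing startup at 05:41, its only logged activity is answering kubeadm's periodic ImageStatus checks at 05:45 and 05:49. Confirming those images are actually present on the node (a sketch):

	sudo crictl images | grep -E 'kube-|etcd|coredns|pause'
	sudo crictl inspecti registry.k8s.io/kube-apiserver:v1.35.0-beta.0 >/dev/null \
		&& echo present
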
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
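
The kubeconfig minikube hands to kubectl points at localhost:8443 on the node, where nothing is listening, hence the refusal. Checking the target by hand (a sketch; the server field value is inferred from the error message, not shown in this log):

	sudo grep 'server:' /var/lib/minikube/kubeconfig
	curl -sk --max-time 5 https://localhost:8443/healthz || echo refused
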
	
	
	==> dmesg <==
	[Dec 9 05:15] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:16] overlayfs: idmapped layers are currently not supported
	[ +51.869899] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:17] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:19] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:23] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:24] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:25] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:26] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:27] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:29] overlayfs: idmapped layers are currently not supported
	[ +17.202326] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:30] overlayfs: idmapped layers are currently not supported
	[ +45.070414] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:31] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:32] overlayfs: idmapped layers are currently not supported
	[ +26.464722] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:33] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:34] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:36] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:38] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:39] overlayfs: idmapped layers are currently not supported
	[  +3.009285] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:40] overlayfs: idmapped layers are currently not supported
	[ +36.331905] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 05:53:24 up 10:35,  0 user,  load average: 0.85, 1.25, 1.70
	Linux kubernetes-upgrade-054206 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 09 05:53:21 kubernetes-upgrade-054206 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:53:22 kubernetes-upgrade-054206 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 960.
	Dec 09 05:53:22 kubernetes-upgrade-054206 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:53:22 kubernetes-upgrade-054206 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:53:22 kubernetes-upgrade-054206 kubelet[12079]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 05:53:22 kubernetes-upgrade-054206 kubelet[12079]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 05:53:22 kubernetes-upgrade-054206 kubelet[12079]: E1209 05:53:22.496366   12079 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:53:22 kubernetes-upgrade-054206 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:53:22 kubernetes-upgrade-054206 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:53:23 kubernetes-upgrade-054206 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 961.
	Dec 09 05:53:23 kubernetes-upgrade-054206 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:53:23 kubernetes-upgrade-054206 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:53:23 kubernetes-upgrade-054206 kubelet[12088]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 05:53:23 kubernetes-upgrade-054206 kubelet[12088]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 05:53:23 kubernetes-upgrade-054206 kubelet[12088]: E1209 05:53:23.445357   12088 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:53:23 kubernetes-upgrade-054206 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:53:23 kubernetes-upgrade-054206 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 09 05:53:24 kubernetes-upgrade-054206 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 962.
	Dec 09 05:53:24 kubernetes-upgrade-054206 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:53:24 kubernetes-upgrade-054206 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 09 05:53:24 kubernetes-upgrade-054206 kubelet[12184]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 05:53:24 kubernetes-upgrade-054206 kubelet[12184]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 09 05:53:24 kubernetes-upgrade-054206 kubelet[12184]: E1209 05:53:24.209111   12184 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 09 05:53:24 kubernetes-upgrade-054206 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 09 05:53:24 kubernetes-upgrade-054206 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
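The kubelet crash loop above (restart counter at 960 and climbing) is the cgroup validation failure: the upgraded kubelet refuses to start on a host that still runs the legacy cgroup v1 hierarchy, so systemd keeps rescheduling the unit and the apiserver never comes up. A quick way to confirm the host's cgroup mode (a diagnostic sketch, not part of the test run; `stat -fc %T` is plain coreutils):

    # cgroup2fs -> unified cgroup v2 hierarchy
    # tmpfs     -> legacy cgroup v1 hierarchy (implied here by the kubelet error)
    stat -fc %T /sys/fs/cgroup

The underlying Ubuntu 20.04 host (kernel 5.15.0-1084-aws, per the kernel section above) boots cgroup v1 by default unless systemd.unified_cgroup_hierarchy=1 is set on the kernel command line, and the node container inherits the host's cgroup layout, which would explain why the upgrade never succeeds on this machine.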
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-054206 -n kubernetes-upgrade-054206
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-054206 -n kubernetes-upgrade-054206: exit status 2 (508.708699ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "kubernetes-upgrade-054206" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-054206" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-054206
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-054206: (2.818347694s)
--- FAIL: TestKubernetesUpgrade (784.17s)

TestPause/serial/Pause (8.57s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-360536 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-360536 --alsologtostderr -v=5: exit status 80 (2.144684846s)

-- stdout --
	* Pausing node pause-360536 ... 
	
	

-- /stdout --
** stderr ** 
	I1209 05:55:32.218213 1813698 out.go:360] Setting OutFile to fd 1 ...
	I1209 05:55:32.218429 1813698 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:55:32.218453 1813698 out.go:374] Setting ErrFile to fd 2...
	I1209 05:55:32.218473 1813698 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:55:32.218907 1813698 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 05:55:32.219384 1813698 out.go:368] Setting JSON to false
	I1209 05:55:32.219430 1813698 mustload.go:66] Loading cluster: pause-360536
	I1209 05:55:32.220291 1813698 config.go:182] Loaded profile config "pause-360536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 05:55:32.221910 1813698 cli_runner.go:164] Run: docker container inspect pause-360536 --format={{.State.Status}}
	I1209 05:55:32.248759 1813698 host.go:66] Checking if "pause-360536" exists ...
	I1209 05:55:32.249087 1813698 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:55:32.352942 1813698 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-09 05:55:32.343830228 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:55:32.353851 1813698 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765151505-21409/minikube-v1.37.0-1765151505-21409-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765151505-21409-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-360536 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1209 05:55:32.358881 1813698 out.go:179] * Pausing node pause-360536 ... 
	I1209 05:55:32.361959 1813698 host.go:66] Checking if "pause-360536" exists ...
	I1209 05:55:32.362296 1813698 ssh_runner.go:195] Run: systemctl --version
	I1209 05:55:32.362353 1813698 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-360536
	I1209 05:55:32.383749 1813698 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34521 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/pause-360536/id_rsa Username:docker}
	I1209 05:55:32.490105 1813698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:55:32.506041 1813698 pause.go:52] kubelet running: true
	I1209 05:55:32.506114 1813698 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1209 05:55:32.786447 1813698 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1209 05:55:32.786541 1813698 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1209 05:55:32.902421 1813698 cri.go:89] found id: "712b8e299e0d00a264d748cd4d3f07ee466c6e9f18877c2a880b2ab92bf83c95"
	I1209 05:55:32.902440 1813698 cri.go:89] found id: "9772e51b9b6c53ba4dff21be0cd170fdf5aaadcb89b56ddda43a3ddf5eef57e6"
	I1209 05:55:32.902444 1813698 cri.go:89] found id: "e7467cb13242fd4298ef6b94d6779d88e592279e8ce4e8bdaa7e3f1524b1df76"
	I1209 05:55:32.902448 1813698 cri.go:89] found id: "3d49925b5eff2c3fa79ae102703805c6c5ecb00d2bf50c997425bb41b782fedf"
	I1209 05:55:32.902451 1813698 cri.go:89] found id: "900df275bbf8f64c19d38b94f9cde64f7087cccdcf844c808580d2b27164b4a1"
	I1209 05:55:32.902454 1813698 cri.go:89] found id: "99893bf59b7055a42e8a3f852c218a2a871130283bcb5b180e6d8773a8a89cff"
	I1209 05:55:32.902457 1813698 cri.go:89] found id: "2c3e701aebc8fd7c9cacf4d34b3b6b1278c4671dcdaaffb6a19b9c2a9760602f"
	I1209 05:55:32.902460 1813698 cri.go:89] found id: "f8b656cd67507995d80e225eb30dd7eac6852d52de0d039d7c73b9215073240a"
	I1209 05:55:32.902463 1813698 cri.go:89] found id: "dbc988236c83b9792452edc69aa48d068d2c02e4e2a16e1ac5e9317bd8e7c144"
	I1209 05:55:32.902471 1813698 cri.go:89] found id: "4ac4f4354ea8aeaa8d72fb1db132bac2c9601f0cd38da6ce7cc2fe19b5758e74"
	I1209 05:55:32.902474 1813698 cri.go:89] found id: "bed22ccb7174d8573b035c5745652a3502746577e2e3a33bcf8c0809160eb6c7"
	I1209 05:55:32.902477 1813698 cri.go:89] found id: "0ad5fb714599bb552554da63005578d1324d21b770795097c0345f14a0df959b"
	I1209 05:55:32.902480 1813698 cri.go:89] found id: "ce2dba3551b996a64989bc788883d90b88da5c42ffb10f2cabc3d025a51486ef"
	I1209 05:55:32.902482 1813698 cri.go:89] found id: "09de1532a88e8f8b72157dc2ccf11c063c29511bd00d34eaf7f74ba8870c0f63"
	I1209 05:55:32.902498 1813698 cri.go:89] found id: ""
	I1209 05:55:32.902553 1813698 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 05:55:32.917714 1813698 retry.go:31] will retry after 309.54636ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T05:55:32Z" level=error msg="open /run/runc: no such file or directory"
	I1209 05:55:33.228313 1813698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:55:33.242478 1813698 pause.go:52] kubelet running: false
	I1209 05:55:33.242543 1813698 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1209 05:55:33.404937 1813698 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1209 05:55:33.405082 1813698 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1209 05:55:33.476117 1813698 cri.go:89] found id: "712b8e299e0d00a264d748cd4d3f07ee466c6e9f18877c2a880b2ab92bf83c95"
	I1209 05:55:33.476135 1813698 cri.go:89] found id: "9772e51b9b6c53ba4dff21be0cd170fdf5aaadcb89b56ddda43a3ddf5eef57e6"
	I1209 05:55:33.476140 1813698 cri.go:89] found id: "e7467cb13242fd4298ef6b94d6779d88e592279e8ce4e8bdaa7e3f1524b1df76"
	I1209 05:55:33.476144 1813698 cri.go:89] found id: "3d49925b5eff2c3fa79ae102703805c6c5ecb00d2bf50c997425bb41b782fedf"
	I1209 05:55:33.476148 1813698 cri.go:89] found id: "900df275bbf8f64c19d38b94f9cde64f7087cccdcf844c808580d2b27164b4a1"
	I1209 05:55:33.476151 1813698 cri.go:89] found id: "99893bf59b7055a42e8a3f852c218a2a871130283bcb5b180e6d8773a8a89cff"
	I1209 05:55:33.476154 1813698 cri.go:89] found id: "2c3e701aebc8fd7c9cacf4d34b3b6b1278c4671dcdaaffb6a19b9c2a9760602f"
	I1209 05:55:33.476157 1813698 cri.go:89] found id: "f8b656cd67507995d80e225eb30dd7eac6852d52de0d039d7c73b9215073240a"
	I1209 05:55:33.476160 1813698 cri.go:89] found id: "dbc988236c83b9792452edc69aa48d068d2c02e4e2a16e1ac5e9317bd8e7c144"
	I1209 05:55:33.476169 1813698 cri.go:89] found id: "4ac4f4354ea8aeaa8d72fb1db132bac2c9601f0cd38da6ce7cc2fe19b5758e74"
	I1209 05:55:33.476176 1813698 cri.go:89] found id: "bed22ccb7174d8573b035c5745652a3502746577e2e3a33bcf8c0809160eb6c7"
	I1209 05:55:33.476180 1813698 cri.go:89] found id: "0ad5fb714599bb552554da63005578d1324d21b770795097c0345f14a0df959b"
	I1209 05:55:33.476182 1813698 cri.go:89] found id: "ce2dba3551b996a64989bc788883d90b88da5c42ffb10f2cabc3d025a51486ef"
	I1209 05:55:33.476185 1813698 cri.go:89] found id: "09de1532a88e8f8b72157dc2ccf11c063c29511bd00d34eaf7f74ba8870c0f63"
	I1209 05:55:33.476188 1813698 cri.go:89] found id: ""
	I1209 05:55:33.476236 1813698 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 05:55:33.489482 1813698 retry.go:31] will retry after 427.41741ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T05:55:33Z" level=error msg="open /run/runc: no such file or directory"
	I1209 05:55:33.917084 1813698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:55:33.937659 1813698 pause.go:52] kubelet running: false
	I1209 05:55:33.937741 1813698 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1209 05:55:34.140830 1813698 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1209 05:55:34.140916 1813698 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1209 05:55:34.245640 1813698 cri.go:89] found id: "712b8e299e0d00a264d748cd4d3f07ee466c6e9f18877c2a880b2ab92bf83c95"
	I1209 05:55:34.245661 1813698 cri.go:89] found id: "9772e51b9b6c53ba4dff21be0cd170fdf5aaadcb89b56ddda43a3ddf5eef57e6"
	I1209 05:55:34.245665 1813698 cri.go:89] found id: "e7467cb13242fd4298ef6b94d6779d88e592279e8ce4e8bdaa7e3f1524b1df76"
	I1209 05:55:34.245669 1813698 cri.go:89] found id: "3d49925b5eff2c3fa79ae102703805c6c5ecb00d2bf50c997425bb41b782fedf"
	I1209 05:55:34.245672 1813698 cri.go:89] found id: "900df275bbf8f64c19d38b94f9cde64f7087cccdcf844c808580d2b27164b4a1"
	I1209 05:55:34.245676 1813698 cri.go:89] found id: "99893bf59b7055a42e8a3f852c218a2a871130283bcb5b180e6d8773a8a89cff"
	I1209 05:55:34.245679 1813698 cri.go:89] found id: "2c3e701aebc8fd7c9cacf4d34b3b6b1278c4671dcdaaffb6a19b9c2a9760602f"
	I1209 05:55:34.245682 1813698 cri.go:89] found id: "f8b656cd67507995d80e225eb30dd7eac6852d52de0d039d7c73b9215073240a"
	I1209 05:55:34.245685 1813698 cri.go:89] found id: "dbc988236c83b9792452edc69aa48d068d2c02e4e2a16e1ac5e9317bd8e7c144"
	I1209 05:55:34.245704 1813698 cri.go:89] found id: "4ac4f4354ea8aeaa8d72fb1db132bac2c9601f0cd38da6ce7cc2fe19b5758e74"
	I1209 05:55:34.245708 1813698 cri.go:89] found id: "bed22ccb7174d8573b035c5745652a3502746577e2e3a33bcf8c0809160eb6c7"
	I1209 05:55:34.245712 1813698 cri.go:89] found id: "0ad5fb714599bb552554da63005578d1324d21b770795097c0345f14a0df959b"
	I1209 05:55:34.245715 1813698 cri.go:89] found id: "ce2dba3551b996a64989bc788883d90b88da5c42ffb10f2cabc3d025a51486ef"
	I1209 05:55:34.245720 1813698 cri.go:89] found id: "09de1532a88e8f8b72157dc2ccf11c063c29511bd00d34eaf7f74ba8870c0f63"
	I1209 05:55:34.245723 1813698 cri.go:89] found id: ""
	I1209 05:55:34.245777 1813698 ssh_runner.go:195] Run: sudo runc list -f json
	I1209 05:55:34.263106 1813698 out.go:203] 
	W1209 05:55:34.267023 1813698 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T05:55:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T05:55:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1209 05:55:34.267046 1813698 out.go:285] * 
	* 
	W1209 05:55:34.280728 1813698 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 05:55:34.285629 1813698 out.go:203] 

** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-360536 --alsologtostderr -v=5" : exit status 80
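The pause never reaches the runtime: minikube enumerates running containers with `sudo runc list -f json`, and runc exits 1 because its default state root /run/runc does not exist on the node, so all three retries above fail identically. A hand-probe of the state roots (a diagnostic sketch; the /run/crio path and the runtime_root explanation are assumptions about this cri-o setup, not taken from the report):

    # check which runtime state directories actually exist inside the node
    minikube -p pause-360536 ssh -- ls -d /run/runc /run/crio
    # runc takes an explicit state root; point it at whichever directory exists
    minikube -p pause-360536 ssh -- sudo runc --root /run/runc list -f json

If cri-o created the containers under a different runtime_root, listing /run/runc will keep failing no matter how often it is retried, which matches the identical errors logged above.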
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-360536
helpers_test.go:243: (dbg) docker inspect pause-360536:

-- stdout --
	[
	    {
	        "Id": "7ef4edc82fd93becc6fbfce57c59dee39b8eca432f255dbaccee9c853ab29d4b",
	        "Created": "2025-12-09T05:53:36.696756206Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1805399,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T05:53:36.795466676Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/7ef4edc82fd93becc6fbfce57c59dee39b8eca432f255dbaccee9c853ab29d4b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7ef4edc82fd93becc6fbfce57c59dee39b8eca432f255dbaccee9c853ab29d4b/hostname",
	        "HostsPath": "/var/lib/docker/containers/7ef4edc82fd93becc6fbfce57c59dee39b8eca432f255dbaccee9c853ab29d4b/hosts",
	        "LogPath": "/var/lib/docker/containers/7ef4edc82fd93becc6fbfce57c59dee39b8eca432f255dbaccee9c853ab29d4b/7ef4edc82fd93becc6fbfce57c59dee39b8eca432f255dbaccee9c853ab29d4b-json.log",
	        "Name": "/pause-360536",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-360536:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-360536",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7ef4edc82fd93becc6fbfce57c59dee39b8eca432f255dbaccee9c853ab29d4b",
	                "LowerDir": "/var/lib/docker/overlay2/1e3a5dd97f10bc064a669d7fe74168874efc91608eb9e84a99bb978dd23fd9af-init/diff:/var/lib/docker/overlay2/cb3f2b8eaaa8875b2899fccd39c4eec1759909855a0b804bc10246bdeabb16ed/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1e3a5dd97f10bc064a669d7fe74168874efc91608eb9e84a99bb978dd23fd9af/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1e3a5dd97f10bc064a669d7fe74168874efc91608eb9e84a99bb978dd23fd9af/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1e3a5dd97f10bc064a669d7fe74168874efc91608eb9e84a99bb978dd23fd9af/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-360536",
	                "Source": "/var/lib/docker/volumes/pause-360536/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-360536",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-360536",
	                "name.minikube.sigs.k8s.io": "pause-360536",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3f2ab4ec541101388be2b1be30b0b9d92f0393a7eec555d7b203c81717a84cd2",
	            "SandboxKey": "/var/run/docker/netns/3f2ab4ec5411",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34521"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34522"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34525"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34523"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34524"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-360536": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:53:6a:e8:f6:a3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "317b305b019c9050f0340356c359b45cb680e15f44e74e98f478925f59aebd62",
	                    "EndpointID": "5fbbe84dfe62787f30e57d5c612773d900cdfe96953bb114794656847f498c50",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-360536",
	                        "7ef4edc82fd9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
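Consistent with the failed pause, the inspect output still reports "Status": "running" and "Paused": false for the node container: the pause froze nothing at the Docker layer (it targets the workloads inside the node through the container runtime, and bailed before getting there). The two fields can be read back compactly with standard docker inspect templating:

    docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}}' pause-360536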
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-360536 -n pause-360536
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-360536 -n pause-360536: exit status 2 (434.067087ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-360536 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-360536 logs -n 25: (1.839254196s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                  ARGS                                                                                                                  │         PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p docker-network-039086 --network=bridge                                                                                                                                                                                              │ docker-network-039086   │ jenkins │ v1.37.0 │ 09 Dec 25 05:20 UTC │ 09 Dec 25 05:21 UTC │
	│ delete  │ -p docker-network-039086                                                                                                                                                                                                               │ docker-network-039086   │ jenkins │ v1.37.0 │ 09 Dec 25 05:21 UTC │ 09 Dec 25 05:21 UTC │
	│ start   │ -p existing-network-135598 --network=existing-network                                                                                                                                                                                  │ existing-network-135598 │ jenkins │ v1.37.0 │ 09 Dec 25 05:21 UTC │ 09 Dec 25 05:22 UTC │
	│ delete  │ -p existing-network-135598                                                                                                                                                                                                             │ existing-network-135598 │ jenkins │ v1.37.0 │ 09 Dec 25 05:22 UTC │ 09 Dec 25 05:22 UTC │
	│ start   │ -p custom-subnet-780574 --subnet=192.168.60.0/24                                                                                                                                                                                       │ custom-subnet-780574    │ jenkins │ v1.37.0 │ 09 Dec 25 05:22 UTC │ 09 Dec 25 05:22 UTC │
	│ delete  │ -p custom-subnet-780574                                                                                                                                                                                                                │ custom-subnet-780574    │ jenkins │ v1.37.0 │ 09 Dec 25 05:22 UTC │ 09 Dec 25 05:22 UTC │
	│ start   │ -p static-ip-088572 --static-ip=192.168.200.200                                                                                                                                                                                        │ static-ip-088572        │ jenkins │ v1.37.0 │ 09 Dec 25 05:22 UTC │ 09 Dec 25 05:23 UTC │
	│ ip      │ static-ip-088572 ip                                                                                                                                                                                                                    │ static-ip-088572        │ jenkins │ v1.37.0 │ 09 Dec 25 05:23 UTC │ 09 Dec 25 05:23 UTC │
	│ delete  │ -p static-ip-088572                                                                                                                                                                                                                    │ static-ip-088572        │ jenkins │ v1.37.0 │ 09 Dec 25 05:23 UTC │ 09 Dec 25 05:23 UTC │
	│ start   │ -p first-207634 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ first-207634            │ jenkins │ v1.37.0 │ 09 Dec 25 05:23 UTC │ 09 Dec 25 05:23 UTC │
	│ start   │ -p second-210252 --driver=docker  --container-runtime=crio                                                                                                                                                                             │ second-210252           │ jenkins │ v1.37.0 │ 09 Dec 25 05:23 UTC │ 09 Dec 25 05:24 UTC │
	│ delete  │ -p second-210252                                                                                                                                                                                                                       │ second-210252           │ jenkins │ v1.37.0 │ 09 Dec 25 05:24 UTC │ 09 Dec 25 05:24 UTC │
	│ delete  │ -p first-207634                                                                                                                                                                                                                        │ first-207634            │ jenkins │ v1.37.0 │ 09 Dec 25 05:24 UTC │ 09 Dec 25 05:24 UTC │
	│ start   │ -p mount-start-1-890282 --memory=3072 --mount-string /tmp/TestMountStartserial493503801/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio │ mount-start-1-890282    │ jenkins │ v1.37.0 │ 09 Dec 25 05:24 UTC │ 09 Dec 25 05:24 UTC │
	│ ssh     │ mount-start-1-890282 ssh -- ls /minikube-host                                                                                                                                                                                          │ mount-start-1-890282    │ jenkins │ v1.37.0 │ 09 Dec 25 05:24 UTC │ 09 Dec 25 05:24 UTC │
	│ start   │ -p mount-start-2-892227 --memory=3072 --mount-string /tmp/TestMountStartserial493503801/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio │ mount-start-2-892227    │ jenkins │ v1.37.0 │ 09 Dec 25 05:24 UTC │ 09 Dec 25 05:24 UTC │
	│ ssh     │ mount-start-2-892227 ssh -- ls /minikube-host                                                                                                                                                                                          │ mount-start-2-892227    │ jenkins │ v1.37.0 │ 09 Dec 25 05:24 UTC │ 09 Dec 25 05:24 UTC │
	│ delete  │ -p mount-start-1-890282 --alsologtostderr -v=5                                                                                                                                                                                         │ mount-start-1-890282    │ jenkins │ v1.37.0 │ 09 Dec 25 05:24 UTC │ 09 Dec 25 05:24 UTC │
	│ ssh     │ mount-start-2-892227 ssh -- ls /minikube-host                                                                                                                                                                                          │ mount-start-2-892227    │ jenkins │ v1.37.0 │ 09 Dec 25 05:24 UTC │ 09 Dec 25 05:24 UTC │
	│ stop    │ -p mount-start-2-892227                                                                                                                                                                                                                │ mount-start-2-892227    │ jenkins │ v1.37.0 │ 09 Dec 25 05:24 UTC │ 09 Dec 25 05:24 UTC │
	│ start   │ -p mount-start-2-892227                                                                                                                                                                                                                │ mount-start-2-892227    │ jenkins │ v1.37.0 │ 09 Dec 25 05:24 UTC │ 09 Dec 25 05:25 UTC │
	│ ssh     │ mount-start-2-892227 ssh -- ls /minikube-host                                                                                                                                                                                          │ mount-start-2-892227    │ jenkins │ v1.37.0 │ 09 Dec 25 05:25 UTC │ 09 Dec 25 05:25 UTC │
	│ delete  │ -p mount-start-2-892227                                                                                                                                                                                                                │ mount-start-2-892227    │ jenkins │ v1.37.0 │ 09 Dec 25 05:25 UTC │ 09 Dec 25 05:25 UTC │
	│ delete  │ -p mount-start-1-890282                                                                                                                                                                                                                │ mount-start-1-890282    │ jenkins │ v1.37.0 │ 09 Dec 25 05:25 UTC │ 09 Dec 25 05:25 UTC │
	│ start   │ -p multinode-765524 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                               │ multinode-765524        │ jenkins │ v1.37.0 │ 09 Dec 25 05:25 UTC │ 09 Dec 25 05:27 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 05:55:03
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 05:55:03.593397 1810070 out.go:360] Setting OutFile to fd 1 ...
	I1209 05:55:03.593647 1810070 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:55:03.593690 1810070 out.go:374] Setting ErrFile to fd 2...
	I1209 05:55:03.593716 1810070 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:55:03.594045 1810070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 05:55:03.594489 1810070 out.go:368] Setting JSON to false
	I1209 05:55:03.596026 1810070 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":38244,"bootTime":1765221460,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1209 05:55:03.596216 1810070 start.go:143] virtualization:  
	I1209 05:55:03.603107 1810070 out.go:179] * [pause-360536] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 05:55:03.607025 1810070 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 05:55:03.607130 1810070 notify.go:221] Checking for updates...
	I1209 05:55:03.613272 1810070 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 05:55:03.616602 1810070 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 05:55:03.619019 1810070 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1577059/.minikube
	I1209 05:55:03.621125 1810070 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 05:55:03.624256 1810070 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 05:55:03.628911 1810070 config.go:182] Loaded profile config "pause-360536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 05:55:03.629522 1810070 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 05:55:03.678781 1810070 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 05:55:03.678893 1810070 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:55:03.790048 1810070 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-09 05:55:03.773623827 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:55:03.790153 1810070 docker.go:319] overlay module found
	I1209 05:55:03.793370 1810070 out.go:179] * Using the docker driver based on existing profile
	I1209 05:55:03.796489 1810070 start.go:309] selected driver: docker
	I1209 05:55:03.796522 1810070 start.go:927] validating driver "docker" against &{Name:pause-360536 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-360536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:55:03.796668 1810070 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 05:55:03.796924 1810070 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:55:03.909420 1810070 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-09 05:55:03.897434532 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:55:03.909837 1810070 cni.go:84] Creating CNI manager for ""
	I1209 05:55:03.909907 1810070 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 05:55:03.909951 1810070 start.go:353] cluster config:
	{Name:pause-360536 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-360536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:55:03.915268 1810070 out.go:179] * Starting "pause-360536" primary control-plane node in "pause-360536" cluster
	I1209 05:55:03.919291 1810070 cache.go:134] Beginning downloading kic base image for docker with crio
	I1209 05:55:03.923570 1810070 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 05:55:03.926653 1810070 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 05:55:03.926701 1810070 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1209 05:55:03.926711 1810070 cache.go:65] Caching tarball of preloaded images
	I1209 05:55:03.926809 1810070 preload.go:238] Found /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1209 05:55:03.926819 1810070 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1209 05:55:03.926967 1810070 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/pause-360536/config.json ...
	I1209 05:55:03.927189 1810070 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 05:55:03.952899 1810070 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 05:55:03.952917 1810070 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 05:55:03.952932 1810070 cache.go:243] Successfully downloaded all kic artifacts
	I1209 05:55:03.952963 1810070 start.go:360] acquireMachinesLock for pause-360536: {Name:mk6f8dbcf8856e7e55d5b6d2c6805d15099c5d00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:55:03.953017 1810070 start.go:364] duration metric: took 37.219µs to acquireMachinesLock for "pause-360536"
	I1209 05:55:03.953036 1810070 start.go:96] Skipping create...Using existing machine configuration
	I1209 05:55:03.953042 1810070 fix.go:54] fixHost starting: 
	I1209 05:55:03.953322 1810070 cli_runner.go:164] Run: docker container inspect pause-360536 --format={{.State.Status}}
	I1209 05:55:03.975716 1810070 fix.go:112] recreateIfNeeded on pause-360536: state=Running err=<nil>
	W1209 05:55:03.975763 1810070 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 05:55:03.978017 1810070 out.go:252] * Updating the running docker "pause-360536" container ...
	I1209 05:55:03.978050 1810070 machine.go:94] provisionDockerMachine start ...
	I1209 05:55:03.978128 1810070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-360536
	I1209 05:55:04.006113 1810070 main.go:143] libmachine: Using SSH client type: native
	I1209 05:55:04.006487 1810070 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34521 <nil> <nil>}
	I1209 05:55:04.006497 1810070 main.go:143] libmachine: About to run SSH command:
	hostname
	I1209 05:55:04.208430 1810070 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-360536
	
	I1209 05:55:04.208504 1810070 ubuntu.go:182] provisioning hostname "pause-360536"
	I1209 05:55:04.208637 1810070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-360536
	I1209 05:55:04.244950 1810070 main.go:143] libmachine: Using SSH client type: native
	I1209 05:55:04.245258 1810070 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34521 <nil> <nil>}
	I1209 05:55:04.245268 1810070 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-360536 && echo "pause-360536" | sudo tee /etc/hostname
	I1209 05:55:04.467100 1810070 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-360536
	
	I1209 05:55:04.467266 1810070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-360536
	I1209 05:55:04.501805 1810070 main.go:143] libmachine: Using SSH client type: native
	I1209 05:55:04.502129 1810070 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34521 <nil> <nil>}
	I1209 05:55:04.502144 1810070 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-360536' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-360536/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-360536' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 05:55:04.695982 1810070 main.go:143] libmachine: SSH cmd err, output: <nil>: 
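The three SSH exchanges above (hostname, sudo hostname ..., and the /etc/hosts fix-up) show the provisioning pattern: each step runs as one shell snippet over the container's published SSH port (127.0.0.1:34521 here). A rough sketch of the same pattern with golang.org/x/crypto/ssh follows; this is not minikube's actual ssh_runner, and the key path is illustrative:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Illustrative path; minikube keeps machine keys under .minikube/machines/<name>/id_rsa.
    	key, err := os.ReadFile("/home/jenkins/.minikube/machines/pause-360536/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local demo only
    	}
    	// 34521 is the host port Docker published for the container's port 22.
    	client, err := ssh.Dial("tcp", "127.0.0.1:34521", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	// Each provisioning step gets its own session running one shell snippet.
    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput(`sudo hostname pause-360536 && echo "pause-360536" | sudo tee /etc/hostname`)
    	fmt.Printf("output: %s, err: %v\n", out, err)
    }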
	I1209 05:55:04.696012 1810070 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1577059/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1577059/.minikube}
	I1209 05:55:04.696042 1810070 ubuntu.go:190] setting up certificates
	I1209 05:55:04.696059 1810070 provision.go:84] configureAuth start
	I1209 05:55:04.696133 1810070 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-360536
	I1209 05:55:04.723160 1810070 provision.go:143] copyHostCerts
	I1209 05:55:04.723235 1810070 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem, removing ...
	I1209 05:55:04.723250 1810070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem
	I1209 05:55:04.723320 1810070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.pem (1078 bytes)
	I1209 05:55:04.723572 1810070 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem, removing ...
	I1209 05:55:04.723588 1810070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem
	I1209 05:55:04.723628 1810070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/cert.pem (1123 bytes)
	I1209 05:55:04.723706 1810070 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem, removing ...
	I1209 05:55:04.723716 1810070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem
	I1209 05:55:04.723743 1810070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1577059/.minikube/key.pem (1675 bytes)
	I1209 05:55:04.723799 1810070 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem org=jenkins.pause-360536 san=[127.0.0.1 192.168.85.2 localhost minikube pause-360536]
	I1209 05:55:04.886162 1810070 provision.go:177] copyRemoteCerts
	I1209 05:55:04.886236 1810070 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 05:55:04.886284 1810070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-360536
	I1209 05:55:04.907805 1810070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34521 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/pause-360536/id_rsa Username:docker}
	I1209 05:55:05.022423 1810070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 05:55:05.046234 1810070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1209 05:55:05.068475 1810070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 05:55:05.092556 1810070 provision.go:87] duration metric: took 396.469117ms to configureAuth
	I1209 05:55:05.092585 1810070 ubuntu.go:206] setting minikube options for container-runtime
	I1209 05:55:05.092803 1810070 config.go:182] Loaded profile config "pause-360536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 05:55:05.092927 1810070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-360536
	I1209 05:55:05.114939 1810070 main.go:143] libmachine: Using SSH client type: native
	I1209 05:55:05.115515 1810070 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil>  [] 0s} 127.0.0.1 34521 <nil> <nil>}
	I1209 05:55:05.115533 1810070 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 05:55:10.547978 1810070 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 05:55:10.548001 1810070 machine.go:97] duration metric: took 6.569943159s to provisionDockerMachine
	I1209 05:55:10.548013 1810070 start.go:293] postStartSetup for "pause-360536" (driver="docker")
	I1209 05:55:10.548070 1810070 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 05:55:10.548141 1810070 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 05:55:10.548206 1810070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-360536
	I1209 05:55:10.569048 1810070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34521 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/pause-360536/id_rsa Username:docker}
	I1209 05:55:10.675095 1810070 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 05:55:10.678304 1810070 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 05:55:10.678333 1810070 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1209 05:55:10.678344 1810070 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1577059/.minikube/addons for local assets ...
	I1209 05:55:10.678399 1810070 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1577059/.minikube/files for local assets ...
	I1209 05:55:10.678477 1810070 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem -> 15805212.pem in /etc/ssl/certs
	I1209 05:55:10.678617 1810070 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 05:55:10.686240 1810070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem --> /etc/ssl/certs/15805212.pem (1708 bytes)
	I1209 05:55:10.705166 1810070 start.go:296] duration metric: took 157.092249ms for postStartSetup
	I1209 05:55:10.705261 1810070 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:55:10.705299 1810070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-360536
	I1209 05:55:10.723857 1810070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34521 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/pause-360536/id_rsa Username:docker}
	I1209 05:55:10.828194 1810070 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 05:55:10.833497 1810070 fix.go:56] duration metric: took 6.880448684s for fixHost
	I1209 05:55:10.833525 1810070 start.go:83] releasing machines lock for "pause-360536", held for 6.880498572s
	I1209 05:55:10.833611 1810070 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-360536
	I1209 05:55:10.856123 1810070 ssh_runner.go:195] Run: cat /version.json
	I1209 05:55:10.856161 1810070 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 05:55:10.856176 1810070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-360536
	I1209 05:55:10.856227 1810070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-360536
	I1209 05:55:10.880890 1810070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34521 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/pause-360536/id_rsa Username:docker}
	I1209 05:55:10.896340 1810070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34521 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/pause-360536/id_rsa Username:docker}
	I1209 05:55:11.074315 1810070 ssh_runner.go:195] Run: systemctl --version
	I1209 05:55:11.081174 1810070 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 05:55:11.138319 1810070 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 05:55:11.143780 1810070 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 05:55:11.143868 1810070 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 05:55:11.153190 1810070 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1209 05:55:11.153229 1810070 start.go:496] detecting cgroup driver to use...
	I1209 05:55:11.153310 1810070 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 05:55:11.153413 1810070 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 05:55:11.169397 1810070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 05:55:11.183693 1810070 docker.go:218] disabling cri-docker service (if available) ...
	I1209 05:55:11.183784 1810070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 05:55:11.200259 1810070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 05:55:11.216960 1810070 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 05:55:11.374116 1810070 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 05:55:11.522860 1810070 docker.go:234] disabling docker service ...
	I1209 05:55:11.522940 1810070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 05:55:11.538244 1810070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 05:55:11.553033 1810070 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 05:55:11.710395 1810070 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 05:55:11.857661 1810070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 05:55:11.873367 1810070 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 05:55:11.887573 1810070 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1209 05:55:11.887680 1810070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 05:55:11.897416 1810070 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 05:55:11.897524 1810070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 05:55:11.906361 1810070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 05:55:11.915457 1810070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 05:55:11.925156 1810070 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 05:55:11.938964 1810070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 05:55:11.949810 1810070 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 05:55:11.959205 1810070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 05:55:11.969053 1810070 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 05:55:11.976877 1810070 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 05:55:11.985002 1810070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:55:12.150237 1810070 ssh_runner.go:195] Run: sudo systemctl restart crio
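The sed edits above amount to a cri-o drop-in along these lines. This is a sketch of just the keys the log touches, assuming /etc/crio/crio.conf.d/02-crio.conf otherwise carries cri-o defaults; the file's full contents were not captured in this log:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

cri-o expects conmon_cgroup to be "pod" rather than a systemd slice when the cgroupfs manager is in use, which is why the conmon_cgroup line is inserted immediately after cgroup_manager.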
	I1209 05:55:12.551119 1810070 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 05:55:12.551231 1810070 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 05:55:12.555345 1810070 start.go:564] Will wait 60s for crictl version
	I1209 05:55:12.555431 1810070 ssh_runner.go:195] Run: which crictl
	I1209 05:55:12.559269 1810070 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1209 05:55:12.584753 1810070 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1209 05:55:12.584878 1810070 ssh_runner.go:195] Run: crio --version
	I1209 05:55:12.615073 1810070 ssh_runner.go:195] Run: crio --version
	I1209 05:55:12.650665 1810070 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1209 05:55:12.653542 1810070 cli_runner.go:164] Run: docker network inspect pause-360536 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 05:55:12.670337 1810070 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1209 05:55:12.674491 1810070 kubeadm.go:884] updating cluster {Name:pause-360536 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-360536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 05:55:12.674689 1810070 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 05:55:12.674751 1810070 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 05:55:12.710388 1810070 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 05:55:12.710419 1810070 crio.go:433] Images already preloaded, skipping extraction
	I1209 05:55:12.710484 1810070 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 05:55:12.738643 1810070 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 05:55:12.738668 1810070 cache_images.go:86] Images are preloaded, skipping loading
	I1209 05:55:12.738680 1810070 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1209 05:55:12.738823 1810070 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-360536 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-360536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
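The empty ExecStart= in the unit above is deliberate: it is the standard systemd override idiom. A drop-in must first clear the inherited ExecStart with an empty assignment before supplying its own, because systemd rejects multiple ExecStart entries for an ordinary (non-oneshot) service.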
	I1209 05:55:12.738930 1810070 ssh_runner.go:195] Run: crio config
	I1209 05:55:12.815132 1810070 cni.go:84] Creating CNI manager for ""
	I1209 05:55:12.815156 1810070 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 05:55:12.815173 1810070 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1209 05:55:12.815200 1810070 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-360536 NodeName:pause-360536 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 05:55:12.815389 1810070 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-360536"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 05:55:12.815472 1810070 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1209 05:55:12.824993 1810070 binaries.go:51] Found k8s binaries, skipping transfer
	I1209 05:55:12.825069 1810070 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 05:55:12.846692 1810070 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1209 05:55:12.864775 1810070 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 05:55:12.883334 1810070 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1209 05:55:12.903702 1810070 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1209 05:55:12.909065 1810070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:55:13.114290 1810070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 05:55:13.139600 1810070 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/pause-360536 for IP: 192.168.85.2
	I1209 05:55:13.139622 1810070 certs.go:195] generating shared ca certs ...
	I1209 05:55:13.139639 1810070 certs.go:227] acquiring lock for ca certs: {Name:mkbe8bce08db7aa945866791683d426e1b560718 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:55:13.139786 1810070 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key
	I1209 05:55:13.139854 1810070 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key
	I1209 05:55:13.139872 1810070 certs.go:257] generating profile certs ...
	I1209 05:55:13.139979 1810070 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/pause-360536/client.key
	I1209 05:55:13.140056 1810070 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/pause-360536/apiserver.key.684fa11e
	I1209 05:55:13.140105 1810070 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/pause-360536/proxy-client.key
	I1209 05:55:13.140244 1810070 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem (1338 bytes)
	W1209 05:55:13.140288 1810070 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521_empty.pem, impossibly tiny 0 bytes
	I1209 05:55:13.140300 1810070 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 05:55:13.140359 1810070 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/ca.pem (1078 bytes)
	I1209 05:55:13.140398 1810070 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/cert.pem (1123 bytes)
	I1209 05:55:13.140429 1810070 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/key.pem (1675 bytes)
	I1209 05:55:13.140487 1810070 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem (1708 bytes)
	I1209 05:55:13.141164 1810070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 05:55:13.163825 1810070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 05:55:13.190953 1810070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 05:55:13.214392 1810070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 05:55:13.245194 1810070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/pause-360536/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1209 05:55:13.264256 1810070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/pause-360536/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 05:55:13.283544 1810070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/pause-360536/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 05:55:13.314824 1810070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/pause-360536/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 05:55:13.346785 1810070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/certs/1580521.pem --> /usr/share/ca-certificates/1580521.pem (1338 bytes)
	I1209 05:55:13.423800 1810070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/ssl/certs/15805212.pem --> /usr/share/ca-certificates/15805212.pem (1708 bytes)
	I1209 05:55:13.487780 1810070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 05:55:13.528858 1810070 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 05:55:13.601081 1810070 ssh_runner.go:195] Run: openssl version
	I1209 05:55:13.611467 1810070 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1580521.pem
	I1209 05:55:13.627465 1810070 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1580521.pem /etc/ssl/certs/1580521.pem
	I1209 05:55:13.663298 1810070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1580521.pem
	I1209 05:55:13.670267 1810070 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 04:27 /usr/share/ca-certificates/1580521.pem
	I1209 05:55:13.670347 1810070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1580521.pem
	I1209 05:55:13.768142 1810070 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1209 05:55:13.778434 1810070 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/15805212.pem
	I1209 05:55:13.803535 1810070 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/15805212.pem /etc/ssl/certs/15805212.pem
	I1209 05:55:13.845037 1810070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15805212.pem
	I1209 05:55:13.858802 1810070 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 04:27 /usr/share/ca-certificates/15805212.pem
	I1209 05:55:13.858880 1810070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15805212.pem
	I1209 05:55:13.938433 1810070 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1209 05:55:13.950198 1810070 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:55:13.965235 1810070 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1209 05:55:13.980505 1810070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:55:13.987191 1810070 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 04:17 /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:55:13.987263 1810070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 05:55:14.056243 1810070 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
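The hash / ln -fs / test -L triple repeated above for 1580521.pem, 15805212.pem and minikubeCA.pem implements the OpenSSL trust-directory convention: a CA is looked up by a symlink named <subject-hash>.0, so /etc/ssl/certs/b5213941.0 points at minikubeCA.pem. A sketch of the same step in Go, shelling out to openssl (paths illustrative; this is not minikube's code):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem" // illustrative
    	// Same subject-hash computation as `openssl x509 -hash -noout`.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for the minikube CA above
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	os.Remove(link) // emulate `ln -fs`: replace an existing link if present
    	if err := os.Symlink(cert, link); err != nil {
    		panic(err)
    	}
    	fmt.Println(link, "->", cert)
    }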
	I1209 05:55:14.064288 1810070 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 05:55:14.072645 1810070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 05:55:14.129774 1810070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 05:55:14.182593 1810070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 05:55:14.236732 1810070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 05:55:14.301871 1810070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 05:55:14.356490 1810070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
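Each `openssl x509 -checkend 86400` run above asks one question: will the certificate still be valid 86400 seconds (24 hours) from now? The equivalent check with Go's crypto/x509, as a standalone sketch:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	// Pass a PEM certificate path, e.g. one of the files under /var/lib/minikube/certs.
    	data, err := os.ReadFile(os.Args[1])
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		fmt.Fprintln(os.Stderr, "no PEM block found")
    		os.Exit(1)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// Mirrors -checkend 86400: fail if NotAfter falls within the next 24h.
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate will expire within 24h")
    		os.Exit(1)
    	}
    	fmt.Println("certificate is valid for at least 24h")
    }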
	I1209 05:55:14.401868 1810070 kubeadm.go:401] StartCluster: {Name:pause-360536 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-360536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:55:14.402039 1810070 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 05:55:14.402138 1810070 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 05:55:14.461403 1810070 cri.go:89] found id: "712b8e299e0d00a264d748cd4d3f07ee466c6e9f18877c2a880b2ab92bf83c95"
	I1209 05:55:14.461477 1810070 cri.go:89] found id: "9772e51b9b6c53ba4dff21be0cd170fdf5aaadcb89b56ddda43a3ddf5eef57e6"
	I1209 05:55:14.461495 1810070 cri.go:89] found id: "e7467cb13242fd4298ef6b94d6779d88e592279e8ce4e8bdaa7e3f1524b1df76"
	I1209 05:55:14.461531 1810070 cri.go:89] found id: "3d49925b5eff2c3fa79ae102703805c6c5ecb00d2bf50c997425bb41b782fedf"
	I1209 05:55:14.461566 1810070 cri.go:89] found id: "900df275bbf8f64c19d38b94f9cde64f7087cccdcf844c808580d2b27164b4a1"
	I1209 05:55:14.461586 1810070 cri.go:89] found id: "99893bf59b7055a42e8a3f852c218a2a871130283bcb5b180e6d8773a8a89cff"
	I1209 05:55:14.461606 1810070 cri.go:89] found id: "2c3e701aebc8fd7c9cacf4d34b3b6b1278c4671dcdaaffb6a19b9c2a9760602f"
	I1209 05:55:14.461624 1810070 cri.go:89] found id: "f8b656cd67507995d80e225eb30dd7eac6852d52de0d039d7c73b9215073240a"
	I1209 05:55:14.461659 1810070 cri.go:89] found id: "dbc988236c83b9792452edc69aa48d068d2c02e4e2a16e1ac5e9317bd8e7c144"
	I1209 05:55:14.461681 1810070 cri.go:89] found id: "4ac4f4354ea8aeaa8d72fb1db132bac2c9601f0cd38da6ce7cc2fe19b5758e74"
	I1209 05:55:14.461708 1810070 cri.go:89] found id: "bed22ccb7174d8573b035c5745652a3502746577e2e3a33bcf8c0809160eb6c7"
	I1209 05:55:14.461738 1810070 cri.go:89] found id: "0ad5fb714599bb552554da63005578d1324d21b770795097c0345f14a0df959b"
	I1209 05:55:14.461762 1810070 cri.go:89] found id: "ce2dba3551b996a64989bc788883d90b88da5c42ffb10f2cabc3d025a51486ef"
	I1209 05:55:14.461780 1810070 cri.go:89] found id: "09de1532a88e8f8b72157dc2ccf11c063c29511bd00d34eaf7f74ba8870c0f63"
	I1209 05:55:14.461798 1810070 cri.go:89] found id: ""
	I1209 05:55:14.461879 1810070 ssh_runner.go:195] Run: sudo runc list -f json
	W1209 05:55:14.485847 1810070 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T05:55:14Z" level=error msg="open /run/runc: no such file or directory"
	I1209 05:55:14.485974 1810070 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 05:55:14.499234 1810070 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1209 05:55:14.499296 1810070 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1209 05:55:14.499370 1810070 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 05:55:14.510082 1810070 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 05:55:14.510849 1810070 kubeconfig.go:125] found "pause-360536" server: "https://192.168.85.2:8443"
	I1209 05:55:14.511675 1810070 kapi.go:59] client config for pause-360536: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/pause-360536/client.crt", KeyFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/pause-360536/client.key", CAFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 05:55:14.512156 1810070 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1209 05:55:14.512169 1810070 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1209 05:55:14.512174 1810070 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1209 05:55:14.512179 1810070 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1209 05:55:14.512183 1810070 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1209 05:55:14.512465 1810070 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 05:55:14.522471 1810070 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1209 05:55:14.522553 1810070 kubeadm.go:602] duration metric: took 23.22935ms to restartPrimaryControlPlane
	I1209 05:55:14.522596 1810070 kubeadm.go:403] duration metric: took 120.736007ms to StartCluster
	I1209 05:55:14.522634 1810070 settings.go:142] acquiring lock: {Name:mk2ff9b0d23dc8757d89015af482b8c477568e49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:55:14.522709 1810070 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 05:55:14.523797 1810070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/kubeconfig: {Name:mk56da51bd85daae017f7ca18ae73d8a385a4c6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:55:14.524090 1810070 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 05:55:14.524597 1810070 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 05:55:14.524711 1810070 config.go:182] Loaded profile config "pause-360536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 05:55:14.528278 1810070 out.go:179] * Verifying Kubernetes components...
	I1209 05:55:14.528392 1810070 out.go:179] * Enabled addons: 
	I1209 05:55:14.531295 1810070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 05:55:14.531412 1810070 addons.go:530] duration metric: took 6.825016ms for enable addons: enabled=[]
	I1209 05:55:14.792931 1810070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 05:55:14.808419 1810070 node_ready.go:35] waiting up to 6m0s for node "pause-360536" to be "Ready" ...
	I1209 05:55:17.965598 1810070 node_ready.go:49] node "pause-360536" is "Ready"
	I1209 05:55:17.965695 1810070 node_ready.go:38] duration metric: took 3.157229549s for node "pause-360536" to be "Ready" ...
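The readiness wait above took about 3.2s: minikube polls the node object until its Ready condition turns True. A minimal sketch of the same test with k8s.io/client-go (kubeconfig path illustrative; this is not minikube's node_ready.go):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	node, err := cs.CoreV1().Nodes().Get(context.Background(), "pause-360536", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	// The kubelet reports readiness through the NodeReady condition.
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			fmt.Println("Ready condition:", c.Status) // "True" once the node is healthy
    		}
    	}
    }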
	I1209 05:55:17.965726 1810070 api_server.go:52] waiting for apiserver process to appear ...
	I1209 05:55:17.965812 1810070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:55:17.983939 1810070 api_server.go:72] duration metric: took 3.459780915s to wait for apiserver process to appear ...
	I1209 05:55:17.983965 1810070 api_server.go:88] waiting for apiserver healthz status ...
	I1209 05:55:17.983985 1810070 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1209 05:55:18.085484 1810070 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 05:55:18.085582 1810070 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
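These 500s are expected during a restart: entries flagged "[-] ... failed: reason withheld" are poststarthooks that have not finished yet, and each retry below shows fewer of them until /healthz returns 200. A bare-bones sketch of such a poll loop (endpoint hardcoded and TLS verification skipped purely to keep the demo short; a real client would trust the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
    		},
    	}
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.85.2:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver healthy:", string(body))
    				return
    			}
    			// 500 with failing poststarthooks: the apiserver is still starting up.
    			fmt.Printf("healthz returned %d, retrying...\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for healthz")
    }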
	I1209 05:55:18.484110 1810070 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1209 05:55:18.493139 1810070 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 05:55:18.493224 1810070 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 05:55:18.984655 1810070 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1209 05:55:19.005483 1810070 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1209 05:55:19.005519 1810070 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1209 05:55:19.484917 1810070 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1209 05:55:19.498924 1810070 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1209 05:55:19.500135 1810070 api_server.go:141] control plane version: v1.34.2
	I1209 05:55:19.500157 1810070 api_server.go:131] duration metric: took 1.516184265s to wait for apiserver health ...
	I1209 05:55:19.500166 1810070 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 05:55:19.507563 1810070 system_pods.go:59] 7 kube-system pods found
	I1209 05:55:19.507677 1810070 system_pods.go:61] "coredns-66bc5c9577-z2ccv" [576ed217-b1ef-4ae7-9488-aac220856947] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 05:55:19.507722 1810070 system_pods.go:61] "etcd-pause-360536" [7ba5a463-e5c6-4f4f-8adc-3e59cd812332] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 05:55:19.507752 1810070 system_pods.go:61] "kindnet-k2bj9" [cc8996cb-ab02-4cd5-b339-c76a346e299e] Running
	I1209 05:55:19.507775 1810070 system_pods.go:61] "kube-apiserver-pause-360536" [074287a8-4ce4-42cc-a875-4d5fc11127b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 05:55:19.507810 1810070 system_pods.go:61] "kube-controller-manager-pause-360536" [8adcc71e-1dc1-4dff-b4f7-617b7daaa847] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 05:55:19.507837 1810070 system_pods.go:61] "kube-proxy-c64ck" [c41f125e-2794-4b73-9e6f-853ef6317344] Running
	I1209 05:55:19.507859 1810070 system_pods.go:61] "kube-scheduler-pause-360536" [debb0359-0f65-48b3-a38b-a982456b736a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 05:55:19.507892 1810070 system_pods.go:74] duration metric: took 7.718884ms to wait for pod list to return data ...
	I1209 05:55:19.507922 1810070 default_sa.go:34] waiting for default service account to be created ...
	I1209 05:55:19.519167 1810070 default_sa.go:45] found service account: "default"
	I1209 05:55:19.519191 1810070 default_sa.go:55] duration metric: took 11.249842ms for default service account to be created ...
	I1209 05:55:19.519202 1810070 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 05:55:19.534988 1810070 system_pods.go:86] 7 kube-system pods found
	I1209 05:55:19.535031 1810070 system_pods.go:89] "coredns-66bc5c9577-z2ccv" [576ed217-b1ef-4ae7-9488-aac220856947] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1209 05:55:19.535046 1810070 system_pods.go:89] "etcd-pause-360536" [7ba5a463-e5c6-4f4f-8adc-3e59cd812332] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1209 05:55:19.535052 1810070 system_pods.go:89] "kindnet-k2bj9" [cc8996cb-ab02-4cd5-b339-c76a346e299e] Running
	I1209 05:55:19.535064 1810070 system_pods.go:89] "kube-apiserver-pause-360536" [074287a8-4ce4-42cc-a875-4d5fc11127b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1209 05:55:19.535075 1810070 system_pods.go:89] "kube-controller-manager-pause-360536" [8adcc71e-1dc1-4dff-b4f7-617b7daaa847] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1209 05:55:19.535079 1810070 system_pods.go:89] "kube-proxy-c64ck" [c41f125e-2794-4b73-9e6f-853ef6317344] Running
	I1209 05:55:19.535162 1810070 system_pods.go:89] "kube-scheduler-pause-360536" [debb0359-0f65-48b3-a38b-a982456b736a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1209 05:55:19.535170 1810070 system_pods.go:126] duration metric: took 15.9619ms to wait for k8s-apps to be running ...
	I1209 05:55:19.535178 1810070 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 05:55:19.535241 1810070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:55:19.582109 1810070 system_svc.go:56] duration metric: took 46.919841ms WaitForService to wait for kubelet
	I1209 05:55:19.582139 1810070 kubeadm.go:587] duration metric: took 5.057989032s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 05:55:19.582159 1810070 node_conditions.go:102] verifying NodePressure condition ...
	I1209 05:55:19.595096 1810070 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1209 05:55:19.595127 1810070 node_conditions.go:123] node cpu capacity is 2
	I1209 05:55:19.595140 1810070 node_conditions.go:105] duration metric: took 12.974906ms to run NodePressure ...
	I1209 05:55:19.595154 1810070 start.go:242] waiting for startup goroutines ...
	I1209 05:55:19.595161 1810070 start.go:247] waiting for cluster config update ...
	I1209 05:55:19.595169 1810070 start.go:256] writing updated cluster config ...
	I1209 05:55:19.595482 1810070 ssh_runner.go:195] Run: rm -f paused
	I1209 05:55:19.599661 1810070 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 05:55:19.600326 1810070 kapi.go:59] client config for pause-360536: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/pause-360536/client.crt", KeyFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/pause-360536/client.key", CAFile:"/home/jenkins/minikube-integration/22081-1577059/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 05:55:19.604895 1810070 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-z2ccv" in "kube-system" namespace to be "Ready" or be gone ...
	W1209 05:55:21.615295 1810070 pod_ready.go:104] pod "coredns-66bc5c9577-z2ccv" is not "Ready", error: <nil>
	W1209 05:55:24.111924 1810070 pod_ready.go:104] pod "coredns-66bc5c9577-z2ccv" is not "Ready", error: <nil>
	I1209 05:55:26.111322 1810070 pod_ready.go:94] pod "coredns-66bc5c9577-z2ccv" is "Ready"
	I1209 05:55:26.111346 1810070 pod_ready.go:86] duration metric: took 6.506428086s for pod "coredns-66bc5c9577-z2ccv" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:55:26.115896 1810070 pod_ready.go:83] waiting for pod "etcd-pause-360536" in "kube-system" namespace to be "Ready" or be gone ...
	W1209 05:55:28.121319 1810070 pod_ready.go:104] pod "etcd-pause-360536" is not "Ready", error: <nil>
	W1209 05:55:30.121853 1810070 pod_ready.go:104] pod "etcd-pause-360536" is not "Ready", error: <nil>
	I1209 05:55:31.121956 1810070 pod_ready.go:94] pod "etcd-pause-360536" is "Ready"
	I1209 05:55:31.121982 1810070 pod_ready.go:86] duration metric: took 5.006062113s for pod "etcd-pause-360536" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:55:31.124475 1810070 pod_ready.go:83] waiting for pod "kube-apiserver-pause-360536" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:55:31.130916 1810070 pod_ready.go:94] pod "kube-apiserver-pause-360536" is "Ready"
	I1209 05:55:31.130942 1810070 pod_ready.go:86] duration metric: took 6.440093ms for pod "kube-apiserver-pause-360536" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:55:31.134717 1810070 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-360536" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:55:31.139658 1810070 pod_ready.go:94] pod "kube-controller-manager-pause-360536" is "Ready"
	I1209 05:55:31.139691 1810070 pod_ready.go:86] duration metric: took 4.946424ms for pod "kube-controller-manager-pause-360536" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:55:31.142022 1810070 pod_ready.go:83] waiting for pod "kube-proxy-c64ck" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:55:31.320051 1810070 pod_ready.go:94] pod "kube-proxy-c64ck" is "Ready"
	I1209 05:55:31.320076 1810070 pod_ready.go:86] duration metric: took 178.023623ms for pod "kube-proxy-c64ck" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:55:31.519960 1810070 pod_ready.go:83] waiting for pod "kube-scheduler-pause-360536" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:55:31.920451 1810070 pod_ready.go:94] pod "kube-scheduler-pause-360536" is "Ready"
	I1209 05:55:31.920475 1810070 pod_ready.go:86] duration metric: took 400.473884ms for pod "kube-scheduler-pause-360536" in "kube-system" namespace to be "Ready" or be gone ...
	I1209 05:55:31.920488 1810070 pod_ready.go:40] duration metric: took 12.320794543s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1209 05:55:32.048954 1810070 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1209 05:55:32.052477 1810070 out.go:179] * Done! kubectl is now configured to use "pause-360536" cluster and "default" namespace by default
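Reading the tail of this start log: minikube polls the apiserver's /healthz until the failing poststarthooks (rbac/bootstrap-roles, then scheduling/bootstrap-system-priority-classes) complete and the endpoint flips from 500 to 200, and only then moves on to the per-pod readiness waits. A minimal Go sketch of that poll-until-200 pattern (not minikube's actual api_server.go code; the address, timeout, interval, and TLS handling are illustrative assumptions):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
// A 500 whose body lists "[-]poststarthook/... failed" just means the server
// is up but still running bootstrap hooks, so we retry.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Certificate checks are skipped in this sketch; minikube verifies
		// against its generated cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: "ok"
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.85.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}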
	
	
	==> CRI-O <==
	Dec 09 05:55:13 pause-360536 crio[2105]: time="2025-12-09T05:55:13.819962942Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 05:55:13 pause-360536 crio[2105]: time="2025-12-09T05:55:13.845195037Z" level=info msg="Starting container: e7467cb13242fd4298ef6b94d6779d88e592279e8ce4e8bdaa7e3f1524b1df76" id=d3676d51-6f51-40f2-aa03-f1a0ae0f3613 name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 05:55:13 pause-360536 crio[2105]: time="2025-12-09T05:55:13.873790647Z" level=info msg="Started container" PID=2329 containerID=900df275bbf8f64c19d38b94f9cde64f7087cccdcf844c808580d2b27164b4a1 description=kube-system/kube-apiserver-pause-360536/kube-apiserver id=659c0e8b-8475-427f-976e-2bfc432979b1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6df2c4a7f1905addee280a0c670291c7a2b470862f8fb82bdec74afd10e86705
	Dec 09 05:55:13 pause-360536 crio[2105]: time="2025-12-09T05:55:13.883054228Z" level=info msg="Created container 9772e51b9b6c53ba4dff21be0cd170fdf5aaadcb89b56ddda43a3ddf5eef57e6: kube-system/kindnet-k2bj9/kindnet-cni" id=3005728b-185d-4610-a443-c739a7a17648 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 05:55:13 pause-360536 crio[2105]: time="2025-12-09T05:55:13.883532428Z" level=info msg="Started container" PID=2341 containerID=e7467cb13242fd4298ef6b94d6779d88e592279e8ce4e8bdaa7e3f1524b1df76 description=kube-system/kube-scheduler-pause-360536/kube-scheduler id=d3676d51-6f51-40f2-aa03-f1a0ae0f3613 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e4c3ec79a4f58c6b1d6da060320f462421c7eb23f03483883a805fad7fe6446e
	Dec 09 05:55:13 pause-360536 crio[2105]: time="2025-12-09T05:55:13.88796971Z" level=info msg="Starting container: 9772e51b9b6c53ba4dff21be0cd170fdf5aaadcb89b56ddda43a3ddf5eef57e6" id=24ba51e1-4c88-40d5-99e2-5c0294d43592 name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 05:55:13 pause-360536 crio[2105]: time="2025-12-09T05:55:13.89616544Z" level=info msg="Started container" PID=2356 containerID=9772e51b9b6c53ba4dff21be0cd170fdf5aaadcb89b56ddda43a3ddf5eef57e6 description=kube-system/kindnet-k2bj9/kindnet-cni id=24ba51e1-4c88-40d5-99e2-5c0294d43592 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9417e067c57cd7e5aaa13cdbea7c1fe985ad9a537bebd87288f425a704aef5b3
	Dec 09 05:55:13 pause-360536 crio[2105]: time="2025-12-09T05:55:13.89733145Z" level=info msg="Created container 712b8e299e0d00a264d748cd4d3f07ee466c6e9f18877c2a880b2ab92bf83c95: kube-system/kube-proxy-c64ck/kube-proxy" id=f4d90bd3-3034-4e70-92b2-ea0374f8ed9f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 05:55:13 pause-360536 crio[2105]: time="2025-12-09T05:55:13.898163032Z" level=info msg="Starting container: 712b8e299e0d00a264d748cd4d3f07ee466c6e9f18877c2a880b2ab92bf83c95" id=98e3f0b7-e4f7-4380-b160-97bbfead89e4 name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 05:55:13 pause-360536 crio[2105]: time="2025-12-09T05:55:13.907888854Z" level=info msg="Started container" PID=2372 containerID=712b8e299e0d00a264d748cd4d3f07ee466c6e9f18877c2a880b2ab92bf83c95 description=kube-system/kube-proxy-c64ck/kube-proxy id=98e3f0b7-e4f7-4380-b160-97bbfead89e4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=76df82f24f4c054c3d247254330d249186252d6c39e72623229a3808d5f2d7b4
	Dec 09 05:55:24 pause-360536 crio[2105]: time="2025-12-09T05:55:24.36798651Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 09 05:55:24 pause-360536 crio[2105]: time="2025-12-09T05:55:24.371816089Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 09 05:55:24 pause-360536 crio[2105]: time="2025-12-09T05:55:24.371974926Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 09 05:55:24 pause-360536 crio[2105]: time="2025-12-09T05:55:24.372063985Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 09 05:55:24 pause-360536 crio[2105]: time="2025-12-09T05:55:24.377662715Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 09 05:55:24 pause-360536 crio[2105]: time="2025-12-09T05:55:24.377702814Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 09 05:55:24 pause-360536 crio[2105]: time="2025-12-09T05:55:24.377730876Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 09 05:55:24 pause-360536 crio[2105]: time="2025-12-09T05:55:24.380861543Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 09 05:55:24 pause-360536 crio[2105]: time="2025-12-09T05:55:24.380896473Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 09 05:55:24 pause-360536 crio[2105]: time="2025-12-09T05:55:24.380919447Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 09 05:55:24 pause-360536 crio[2105]: time="2025-12-09T05:55:24.384529947Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 09 05:55:24 pause-360536 crio[2105]: time="2025-12-09T05:55:24.384978723Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 09 05:55:24 pause-360536 crio[2105]: time="2025-12-09T05:55:24.385061637Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 09 05:55:24 pause-360536 crio[2105]: time="2025-12-09T05:55:24.392586467Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 09 05:55:24 pause-360536 crio[2105]: time="2025-12-09T05:55:24.392769255Z" level=info msg="Updated default CNI network name to kindnet"
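The CREATE/WRITE/RENAME lines above are CRI-O's watch on /etc/cni/net.d reacting to kindnet atomically rewriting its conflist (write a .temp file, then rename it into place); after each event CRI-O re-reads the directory and re-selects the default network. A sketch of that kind of watch loop, assuming the fsnotify library (github.com/fsnotify/fsnotify) and the path from the log:

package main

import (
	"log"
	"strings"

	"github.com/fsnotify/fsnotify"
)

func main() {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()

	if err := watcher.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev := <-watcher.Events:
			// One log line per filesystem event, as in the CRI-O output.
			log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
			if strings.HasSuffix(ev.Name, ".conflist") {
				// Re-read the directory and update the default CNI
				// network here.
			}
		case err := <-watcher.Errors:
			log.Println("watch error:", err)
		}
	}
}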
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	712b8e299e0d0       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   21 seconds ago       Running             kube-proxy                1                   76df82f24f4c0       kube-proxy-c64ck                       kube-system
	9772e51b9b6c5       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   21 seconds ago       Running             kindnet-cni               1                   9417e067c57cd       kindnet-k2bj9                          kube-system
	e7467cb13242f       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   22 seconds ago       Running             kube-scheduler            1                   e4c3ec79a4f58       kube-scheduler-pause-360536            kube-system
	3d49925b5eff2       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   22 seconds ago       Running             kube-controller-manager   1                   c9b47f4981d83       kube-controller-manager-pause-360536   kube-system
	900df275bbf8f       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   22 seconds ago       Running             kube-apiserver            1                   6df2c4a7f1905       kube-apiserver-pause-360536            kube-system
	99893bf59b705       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   22 seconds ago       Running             etcd                      1                   b09b77df05d45       etcd-pause-360536                      kube-system
	2c3e701aebc8f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   22 seconds ago       Running             coredns                   1                   4b7285ef1ca7a       coredns-66bc5c9577-z2ccv               kube-system
	f8b656cd67507       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   35 seconds ago       Exited              coredns                   0                   4b7285ef1ca7a       coredns-66bc5c9577-z2ccv               kube-system
	dbc988236c83b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   9417e067c57cd       kindnet-k2bj9                          kube-system
	4ac4f4354ea8a       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   About a minute ago   Exited              kube-proxy                0                   76df82f24f4c0       kube-proxy-c64ck                       kube-system
	bed22ccb7174d       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   About a minute ago   Exited              kube-controller-manager   0                   c9b47f4981d83       kube-controller-manager-pause-360536   kube-system
	0ad5fb714599b       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   About a minute ago   Exited              etcd                      0                   b09b77df05d45       etcd-pause-360536                      kube-system
	ce2dba3551b99       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   About a minute ago   Exited              kube-scheduler            0                   e4c3ec79a4f58       kube-scheduler-pause-360536            kube-system
	09de1532a88e8       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   About a minute ago   Exited              kube-apiserver            0                   6df2c4a7f1905       kube-apiserver-pause-360536            kube-system
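This table is the node-side runtime view (crictl ps -a inside the minikube container): each ATTEMPT 1 Running row is the restarted replacement of the matching ATTEMPT 0 Exited row, and each pair shares a POD ID because the sandboxes survived the restart. Assuming the profile name from this log, the same view can be reproduced with: minikube ssh -p pause-360536 -- sudo crictl ps -a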
	
	
	==> coredns [2c3e701aebc8fd7c9cacf4d34b3b6b1278c4671dcdaaffb6a19b9c2a9760602f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34316 - 46814 "HINFO IN 4234871655145286167.8698256232489913746. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027834306s
	
	
	==> coredns [f8b656cd67507995d80e225eb30dd7eac6852d52de0d039d7c73b9215073240a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57922 - 7641 "HINFO IN 1582322749603180769.2648909561381567358. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01520181s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-360536
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-360536
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d
	                    minikube.k8s.io/name=pause-360536
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_09T05_54_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Dec 2025 05:54:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-360536
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Dec 2025 05:55:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Dec 2025 05:54:59 +0000   Tue, 09 Dec 2025 05:54:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Dec 2025 05:54:59 +0000   Tue, 09 Dec 2025 05:54:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Dec 2025 05:54:59 +0000   Tue, 09 Dec 2025 05:54:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Dec 2025 05:54:59 +0000   Tue, 09 Dec 2025 05:54:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-360536
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 23f1bd729e908485546e733d693697cd
	  System UUID:                11f44272-664a-4239-937b-bd37f60e1949
	  Boot ID:                    3c42bf6f-64e9-4298-a947-b5a2e6063f1e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-z2ccv                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     79s
	  kube-system                 etcd-pause-360536                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         84s
	  kube-system                 kindnet-k2bj9                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      79s
	  kube-system                 kube-apiserver-pause-360536             250m (12%)    0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-controller-manager-pause-360536    200m (10%)    0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-proxy-c64ck                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-scheduler-pause-360536             100m (5%)     0 (0%)      0 (0%)           0 (0%)         84s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 76s                kube-proxy       
	  Normal   Starting                 17s                kube-proxy       
	  Normal   NodeHasSufficientMemory  95s (x8 over 95s)  kubelet          Node pause-360536 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    95s (x8 over 95s)  kubelet          Node pause-360536 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     95s (x8 over 95s)  kubelet          Node pause-360536 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 84s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  84s                kubelet          Node pause-360536 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    84s                kubelet          Node pause-360536 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     84s                kubelet          Node pause-360536 status is now: NodeHasSufficientPID
	  Normal   Starting                 84s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           80s                node-controller  Node pause-360536 event: Registered Node pause-360536 in Controller
	  Normal   NodeReady                36s                kubelet          Node pause-360536 status is now: NodeReady
	  Warning  ContainerGCFailed        24s                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           14s                node-controller  Node pause-360536 event: Registered Node pause-360536 in Controller
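As a cross-check of the Allocated resources figures above: CPU requests sum to 100m + 100m + 100m + 250m + 200m + 0 + 100m = 850m, and 850m of the 2000m allocatable is 42.5%, which the describe output floors to 42%. Memory requests sum to 70Mi + 100Mi + 50Mi = 220Mi against 8022300Ki (about 7834Mi), roughly 2.8%, shown as 2%.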
	
	
	==> dmesg <==
	[ +51.869899] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:17] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:19] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:23] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:24] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:25] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:26] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:27] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:29] overlayfs: idmapped layers are currently not supported
	[ +17.202326] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:30] overlayfs: idmapped layers are currently not supported
	[ +45.070414] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:31] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:32] overlayfs: idmapped layers are currently not supported
	[ +26.464722] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:33] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:34] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:36] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:38] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:39] overlayfs: idmapped layers are currently not supported
	[  +3.009285] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:40] overlayfs: idmapped layers are currently not supported
	[ +36.331905] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:53] overlayfs: idmapped layers are currently not supported
	[  +0.201178] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0ad5fb714599bb552554da63005578d1324d21b770795097c0345f14a0df959b] <==
	{"level":"warn","ts":"2025-12-09T05:54:06.176563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:54:06.202012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:54:06.229606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:54:06.272805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:54:06.412341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:54:16.695723Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"104.185666ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/service-cidrs-controller\" limit:1 ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2025-12-09T05:54:16.711694Z","caller":"traceutil/trace.go:172","msg":"trace[1732159273] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/service-cidrs-controller; range_end:; response_count:1; response_revision:355; }","duration":"120.164452ms","start":"2025-12-09T05:54:16.591511Z","end":"2025-12-09T05:54:16.711675Z","steps":["trace[1732159273] 'agreement among raft nodes before linearized reading'  (duration: 103.967087ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T05:55:05.332379Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-09T05:55:05.332440Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-360536","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-12-09T05:55:05.332530Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-09T05:55:05.332590Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-09T05:55:05.615337Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-09T05:55:05.615399Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-12-09T05:55:05.615502Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-09T05:55:05.615515Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-09T05:55:05.615746Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-09T05:55:05.615781Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-09T05:55:05.615789Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-09T05:55:05.615832Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-09T05:55:05.615840Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-09T05:55:05.615846Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-09T05:55:05.618458Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-12-09T05:55:05.618527Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-09T05:55:05.618552Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-09T05:55:05.618559Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-360536","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> etcd [99893bf59b7055a42e8a3f852c218a2a871130283bcb5b180e6d8773a8a89cff] <==
	{"level":"warn","ts":"2025-12-09T05:55:16.679138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:16.702220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:16.725363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:16.739353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:16.758238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:16.788158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:16.799797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:16.813327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:16.875090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:16.882585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:16.906499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:16.927470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:16.944914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:16.966643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:16.989781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:17.001317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:17.022110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:17.037438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:17.054761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:17.075407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:17.104118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:17.124157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:17.149340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:17.166755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:17.246218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41668","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 05:55:36 up 10:37,  0 user,  load average: 2.48, 2.09, 1.97
	Linux pause-360536 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9772e51b9b6c53ba4dff21be0cd170fdf5aaadcb89b56ddda43a3ddf5eef57e6] <==
	I1209 05:55:14.154346       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1209 05:55:14.154565       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1209 05:55:14.154770       1 main.go:148] setting mtu 1500 for CNI 
	I1209 05:55:14.154788       1 main.go:178] kindnetd IP family: "ipv4"
	I1209 05:55:14.154801       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-09T05:55:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1209 05:55:14.393483       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1209 05:55:14.393580       1 controller.go:381] "Waiting for informer caches to sync"
	I1209 05:55:14.393616       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1209 05:55:14.426724       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1209 05:55:18.196605       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1209 05:55:18.196643       1 metrics.go:72] Registering metrics
	I1209 05:55:18.196711       1 controller.go:711] "Syncing nftables rules"
	I1209 05:55:24.367548       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1209 05:55:24.367699       1 main.go:301] handling current node
	I1209 05:55:34.369124       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1209 05:55:34.369187       1 main.go:301] handling current node
	
	
	==> kindnet [dbc988236c83b9792452edc69aa48d068d2c02e4e2a16e1ac5e9317bd8e7c144] <==
	I1209 05:54:19.529873       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1209 05:54:19.530297       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1209 05:54:19.530492       1 main.go:148] setting mtu 1500 for CNI 
	I1209 05:54:19.530556       1 main.go:178] kindnetd IP family: "ipv4"
	I1209 05:54:19.530638       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-09T05:54:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1209 05:54:19.823787       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1209 05:54:19.823813       1 controller.go:381] "Waiting for informer caches to sync"
	I1209 05:54:19.823822       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1209 05:54:19.824569       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1209 05:54:49.824253       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1209 05:54:49.824379       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1209 05:54:49.824473       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1209 05:54:49.824546       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1209 05:54:51.224508       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1209 05:54:51.224645       1 metrics.go:72] Registering metrics
	I1209 05:54:51.224830       1 controller.go:711] "Syncing nftables rules"
	I1209 05:54:59.827681       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1209 05:54:59.827733       1 main.go:301] handling current node
	
	
	==> kube-apiserver [09de1532a88e8f8b72157dc2ccf11c063c29511bd00d34eaf7f74ba8870c0f63] <==
	W1209 05:55:05.365241       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.365798       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.365990       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.366672       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.366906       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.367150       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.367381       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.368020       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.368103       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.368147       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.368182       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.368216       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.368256       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.368295       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.368332       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.369042       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.369141       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.369182       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.369462       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.369505       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.369801       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.369856       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.370158       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.370233       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [900df275bbf8f64c19d38b94f9cde64f7087cccdcf844c808580d2b27164b4a1] <==
	I1209 05:55:18.036884       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1209 05:55:18.049962       1 aggregator.go:171] initial CRD sync complete...
	I1209 05:55:18.049984       1 autoregister_controller.go:144] Starting autoregister controller
	I1209 05:55:18.049992       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1209 05:55:18.050000       1 cache.go:39] Caches are synced for autoregister controller
	I1209 05:55:18.056009       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1209 05:55:18.074742       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1209 05:55:18.074769       1 policy_source.go:240] refreshing policies
	I1209 05:55:18.084578       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1209 05:55:18.091913       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1209 05:55:18.092017       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1209 05:55:18.092176       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1209 05:55:18.104538       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1209 05:55:18.104860       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1209 05:55:18.104912       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1209 05:55:18.105566       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1209 05:55:18.162734       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1209 05:55:18.162903       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E1209 05:55:18.194744       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1209 05:55:18.809153       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1209 05:55:19.977462       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1209 05:55:21.562429       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1209 05:55:21.630506       1 controller.go:667] quota admission added evaluator for: endpoints
	I1209 05:55:21.730794       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1209 05:55:21.795521       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [3d49925b5eff2c3fa79ae102703805c6c5ecb00d2bf50c997425bb41b782fedf] <==
	I1209 05:55:21.499540       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1209 05:55:21.502930       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1209 05:55:21.506228       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1209 05:55:21.509437       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1209 05:55:21.512769       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1209 05:55:21.517898       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1209 05:55:21.517983       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1209 05:55:21.518011       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1209 05:55:21.518016       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1209 05:55:21.518021       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1209 05:55:21.520813       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1209 05:55:21.522064       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1209 05:55:21.522862       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1209 05:55:21.526347       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1209 05:55:21.532530       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1209 05:55:21.532638       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1209 05:55:21.532678       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1209 05:55:21.533303       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1209 05:55:21.533373       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1209 05:55:21.535776       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1209 05:55:21.539244       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1209 05:55:21.540810       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1209 05:55:21.545129       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1209 05:55:21.546877       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1209 05:55:21.551587       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	
	
	==> kube-controller-manager [bed22ccb7174d8573b035c5745652a3502746577e2e3a33bcf8c0809160eb6c7] <==
	I1209 05:54:15.643529       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1209 05:54:15.643598       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1209 05:54:15.646464       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1209 05:54:15.649744       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1209 05:54:15.649801       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1209 05:54:15.653271       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1209 05:54:15.654809       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1209 05:54:15.662872       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1209 05:54:15.662921       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1209 05:54:15.662951       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1209 05:54:15.662956       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1209 05:54:15.662961       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1209 05:54:15.669266       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1209 05:54:15.675025       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-360536" podCIDRs=["10.244.0.0/24"]
	I1209 05:54:15.675081       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1209 05:54:15.684357       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1209 05:54:15.684619       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1209 05:54:15.684912       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1209 05:54:15.685125       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1209 05:54:15.686230       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1209 05:54:15.686369       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1209 05:54:15.689290       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1209 05:54:15.689341       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1209 05:54:15.701037       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1209 05:55:00.639625       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [4ac4f4354ea8aeaa8d72fb1db132bac2c9601f0cd38da6ce7cc2fe19b5758e74] <==
	I1209 05:54:19.481991       1 server_linux.go:53] "Using iptables proxy"
	I1209 05:54:19.580565       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1209 05:54:19.687372       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1209 05:54:19.687441       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1209 05:54:19.687537       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 05:54:19.710354       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1209 05:54:19.710486       1 server_linux.go:132] "Using iptables Proxier"
	I1209 05:54:19.714931       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 05:54:19.715327       1 server.go:527] "Version info" version="v1.34.2"
	I1209 05:54:19.715400       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 05:54:19.720882       1 config.go:200] "Starting service config controller"
	I1209 05:54:19.720908       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1209 05:54:19.720929       1 config.go:106] "Starting endpoint slice config controller"
	I1209 05:54:19.720934       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1209 05:54:19.720944       1 config.go:403] "Starting serviceCIDR config controller"
	I1209 05:54:19.720948       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1209 05:54:19.721652       1 config.go:309] "Starting node config controller"
	I1209 05:54:19.721681       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1209 05:54:19.721688       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1209 05:54:19.821390       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1209 05:54:19.821392       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1209 05:54:19.821481       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [712b8e299e0d00a264d748cd4d3f07ee466c6e9f18877c2a880b2ab92bf83c95] <==
	I1209 05:55:15.562494       1 server_linux.go:53] "Using iptables proxy"
	I1209 05:55:16.962666       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1209 05:55:18.264302       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1209 05:55:18.264434       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1209 05:55:18.264546       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 05:55:18.304004       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1209 05:55:18.304134       1 server_linux.go:132] "Using iptables Proxier"
	I1209 05:55:18.324381       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 05:55:18.324755       1 server.go:527] "Version info" version="v1.34.2"
	I1209 05:55:18.325283       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 05:55:18.330785       1 config.go:200] "Starting service config controller"
	I1209 05:55:18.330875       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1209 05:55:18.330922       1 config.go:106] "Starting endpoint slice config controller"
	I1209 05:55:18.330990       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1209 05:55:18.331028       1 config.go:403] "Starting serviceCIDR config controller"
	I1209 05:55:18.331076       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1209 05:55:18.333153       1 config.go:309] "Starting node config controller"
	I1209 05:55:18.333783       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1209 05:55:18.333847       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1209 05:55:18.431983       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1209 05:55:18.431992       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1209 05:55:18.432047       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ce2dba3551b996a64989bc788883d90b88da5c42ffb10f2cabc3d025a51486ef] <==
	E1209 05:54:07.966166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1209 05:54:07.966204       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1209 05:54:07.966284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1209 05:54:07.966323       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1209 05:54:07.966354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1209 05:54:07.966393       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1209 05:54:07.966432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1209 05:54:07.966469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1209 05:54:08.780359       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1209 05:54:08.875685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1209 05:54:08.886854       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1209 05:54:08.972156       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1209 05:54:08.987988       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1209 05:54:09.019046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1209 05:54:09.043124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1209 05:54:09.054066       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1209 05:54:09.091564       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1209 05:54:09.133252       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1209 05:54:09.153172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1209 05:54:11.855876       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 05:55:05.326675       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1209 05:55:05.326706       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1209 05:55:05.327759       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1209 05:55:05.327817       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1209 05:55:05.327833       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e7467cb13242fd4298ef6b94d6779d88e592279e8ce4e8bdaa7e3f1524b1df76] <==
	I1209 05:55:15.485328       1 serving.go:386] Generated self-signed cert in-memory
	W1209 05:55:17.968231       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1209 05:55:17.968374       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1209 05:55:17.968414       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1209 05:55:17.968444       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1209 05:55:18.083861       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1209 05:55:18.083956       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 05:55:18.100130       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1209 05:55:18.101937       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 05:55:18.106857       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 05:55:18.101961       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1209 05:55:18.210860       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 09 05:55:13 pause-360536 kubelet[1341]: E1209 05:55:13.449273    1341 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-360536\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="57b1e1807fd819fa7ad5ea62934d0125" pod="kube-system/kube-apiserver-pause-360536"
	Dec 09 05:55:13 pause-360536 kubelet[1341]: E1209 05:55:13.449728    1341 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-360536\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="ba2558a3cdec79a8a34617c6d55e31ae" pod="kube-system/kube-scheduler-pause-360536"
	Dec 09 05:55:13 pause-360536 kubelet[1341]: E1209 05:55:13.450099    1341 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-360536\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3aaad2ee6567158e025c9d08ddef2553" pod="kube-system/etcd-pause-360536"
	Dec 09 05:55:13 pause-360536 kubelet[1341]: E1209 05:55:13.450414    1341 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-360536\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3920af4bed5f906a5428f98a986f2ee6" pod="kube-system/kube-controller-manager-pause-360536"
	Dec 09 05:55:13 pause-360536 kubelet[1341]: I1209 05:55:13.576149    1341 scope.go:117] "RemoveContainer" containerID="dbc988236c83b9792452edc69aa48d068d2c02e4e2a16e1ac5e9317bd8e7c144"
	Dec 09 05:55:13 pause-360536 kubelet[1341]: E1209 05:55:13.578985    1341 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-360536\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="57b1e1807fd819fa7ad5ea62934d0125" pod="kube-system/kube-apiserver-pause-360536"
	Dec 09 05:55:13 pause-360536 kubelet[1341]: E1209 05:55:13.579779    1341 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-360536\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="ba2558a3cdec79a8a34617c6d55e31ae" pod="kube-system/kube-scheduler-pause-360536"
	Dec 09 05:55:13 pause-360536 kubelet[1341]: E1209 05:55:13.581010    1341 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-360536\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3aaad2ee6567158e025c9d08ddef2553" pod="kube-system/etcd-pause-360536"
	Dec 09 05:55:13 pause-360536 kubelet[1341]: E1209 05:55:13.581619    1341 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-360536\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3920af4bed5f906a5428f98a986f2ee6" pod="kube-system/kube-controller-manager-pause-360536"
	Dec 09 05:55:13 pause-360536 kubelet[1341]: E1209 05:55:13.582747    1341 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-k2bj9\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="cc8996cb-ab02-4cd5-b339-c76a346e299e" pod="kube-system/kindnet-k2bj9"
	Dec 09 05:55:13 pause-360536 kubelet[1341]: E1209 05:55:13.584280    1341 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-z2ccv\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="576ed217-b1ef-4ae7-9488-aac220856947" pod="kube-system/coredns-66bc5c9577-z2ccv"
	Dec 09 05:55:13 pause-360536 kubelet[1341]: E1209 05:55:13.605208    1341 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.85.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-360536?timeout=10s\": dial tcp 192.168.85.2:8443: connect: connection refused" interval="1.6s"
	Dec 09 05:55:13 pause-360536 kubelet[1341]: I1209 05:55:13.704158    1341 scope.go:117] "RemoveContainer" containerID="4ac4f4354ea8aeaa8d72fb1db132bac2c9601f0cd38da6ce7cc2fe19b5758e74"
	Dec 09 05:55:13 pause-360536 kubelet[1341]: E1209 05:55:13.704893    1341 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-k2bj9\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="cc8996cb-ab02-4cd5-b339-c76a346e299e" pod="kube-system/kindnet-k2bj9"
	Dec 09 05:55:13 pause-360536 kubelet[1341]: E1209 05:55:13.705786    1341 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-z2ccv\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="576ed217-b1ef-4ae7-9488-aac220856947" pod="kube-system/coredns-66bc5c9577-z2ccv"
	Dec 09 05:55:13 pause-360536 kubelet[1341]: E1209 05:55:13.706187    1341 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-360536\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="57b1e1807fd819fa7ad5ea62934d0125" pod="kube-system/kube-apiserver-pause-360536"
	Dec 09 05:55:13 pause-360536 kubelet[1341]: E1209 05:55:13.706496    1341 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-360536\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="ba2558a3cdec79a8a34617c6d55e31ae" pod="kube-system/kube-scheduler-pause-360536"
	Dec 09 05:55:13 pause-360536 kubelet[1341]: E1209 05:55:13.706838    1341 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-360536\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3aaad2ee6567158e025c9d08ddef2553" pod="kube-system/etcd-pause-360536"
	Dec 09 05:55:13 pause-360536 kubelet[1341]: E1209 05:55:13.707252    1341 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-360536\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3920af4bed5f906a5428f98a986f2ee6" pod="kube-system/kube-controller-manager-pause-360536"
	Dec 09 05:55:13 pause-360536 kubelet[1341]: E1209 05:55:13.707639    1341 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c64ck\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="c41f125e-2794-4b73-9e6f-853ef6317344" pod="kube-system/kube-proxy-c64ck"
	Dec 09 05:55:21 pause-360536 kubelet[1341]: W1209 05:55:21.365348    1341 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 09 05:55:31 pause-360536 kubelet[1341]: W1209 05:55:31.380907    1341 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 09 05:55:32 pause-360536 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 09 05:55:32 pause-360536 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 09 05:55:32 pause-360536 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-360536 -n pause-360536
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-360536 -n pause-360536: exit status 2 (464.636519ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-360536 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-360536
helpers_test.go:243: (dbg) docker inspect pause-360536:

-- stdout --
	[
	    {
	        "Id": "7ef4edc82fd93becc6fbfce57c59dee39b8eca432f255dbaccee9c853ab29d4b",
	        "Created": "2025-12-09T05:53:36.696756206Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1805399,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-09T05:53:36.795466676Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
	        "ResolvConfPath": "/var/lib/docker/containers/7ef4edc82fd93becc6fbfce57c59dee39b8eca432f255dbaccee9c853ab29d4b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7ef4edc82fd93becc6fbfce57c59dee39b8eca432f255dbaccee9c853ab29d4b/hostname",
	        "HostsPath": "/var/lib/docker/containers/7ef4edc82fd93becc6fbfce57c59dee39b8eca432f255dbaccee9c853ab29d4b/hosts",
	        "LogPath": "/var/lib/docker/containers/7ef4edc82fd93becc6fbfce57c59dee39b8eca432f255dbaccee9c853ab29d4b/7ef4edc82fd93becc6fbfce57c59dee39b8eca432f255dbaccee9c853ab29d4b-json.log",
	        "Name": "/pause-360536",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-360536:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-360536",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7ef4edc82fd93becc6fbfce57c59dee39b8eca432f255dbaccee9c853ab29d4b",
	                "LowerDir": "/var/lib/docker/overlay2/1e3a5dd97f10bc064a669d7fe74168874efc91608eb9e84a99bb978dd23fd9af-init/diff:/var/lib/docker/overlay2/cb3f2b8eaaa8875b2899fccd39c4eec1759909855a0b804bc10246bdeabb16ed/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1e3a5dd97f10bc064a669d7fe74168874efc91608eb9e84a99bb978dd23fd9af/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1e3a5dd97f10bc064a669d7fe74168874efc91608eb9e84a99bb978dd23fd9af/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1e3a5dd97f10bc064a669d7fe74168874efc91608eb9e84a99bb978dd23fd9af/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-360536",
	                "Source": "/var/lib/docker/volumes/pause-360536/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-360536",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-360536",
	                "name.minikube.sigs.k8s.io": "pause-360536",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3f2ab4ec541101388be2b1be30b0b9d92f0393a7eec555d7b203c81717a84cd2",
	            "SandboxKey": "/var/run/docker/netns/3f2ab4ec5411",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34521"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34522"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34525"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34523"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34524"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-360536": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:53:6a:e8:f6:a3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "317b305b019c9050f0340356c359b45cb680e15f44e74e98f478925f59aebd62",
	                    "EndpointID": "5fbbe84dfe62787f30e57d5c612773d900cdfe96953bb114794656847f498c50",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-360536",
	                        "7ef4edc82fd9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-360536 -n pause-360536
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-360536 -n pause-360536: exit status 2 (433.064229ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-360536 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-360536 logs -n 25: (2.473648271s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │    PROFILE     │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-880308 sudo systemctl cat kubelet --no-pager                                                                                     │ auto-880308    │ jenkins │ v1.37.0 │ 09 Dec 25 05:55 UTC │ 09 Dec 25 05:55 UTC │
	│ ssh     │ -p auto-880308 sudo journalctl -xeu kubelet --all --full --no-pager                                                                      │ auto-880308    │ jenkins │ v1.37.0 │ 09 Dec 25 05:55 UTC │ 09 Dec 25 05:55 UTC │
	│ ssh     │ -p auto-880308 sudo cat /etc/kubernetes/kubelet.conf                                                                                     │ auto-880308    │ jenkins │ v1.37.0 │ 09 Dec 25 05:55 UTC │ 09 Dec 25 05:55 UTC │
	│ ssh     │ -p auto-880308 sudo cat /var/lib/kubelet/config.yaml                                                                                     │ auto-880308    │ jenkins │ v1.37.0 │ 09 Dec 25 05:55 UTC │ 09 Dec 25 05:55 UTC │
	│ ssh     │ -p auto-880308 sudo systemctl status docker --all --full --no-pager                                                                      │ auto-880308    │ jenkins │ v1.37.0 │ 09 Dec 25 05:55 UTC │                     │
	│ ssh     │ -p auto-880308 sudo systemctl cat docker --no-pager                                                                                      │ auto-880308    │ jenkins │ v1.37.0 │ 09 Dec 25 05:55 UTC │ 09 Dec 25 05:55 UTC │
	│ ssh     │ -p auto-880308 sudo cat /etc/docker/daemon.json                                                                                          │ auto-880308    │ jenkins │ v1.37.0 │ 09 Dec 25 05:55 UTC │                     │
	│ ssh     │ -p auto-880308 sudo docker system info                                                                                                   │ auto-880308    │ jenkins │ v1.37.0 │ 09 Dec 25 05:55 UTC │                     │
	│ ssh     │ -p auto-880308 sudo systemctl status cri-docker --all --full --no-pager                                                                  │ auto-880308    │ jenkins │ v1.37.0 │ 09 Dec 25 05:55 UTC │                     │
	│ ssh     │ -p auto-880308 sudo systemctl cat cri-docker --no-pager                                                                                  │ auto-880308    │ jenkins │ v1.37.0 │ 09 Dec 25 05:55 UTC │ 09 Dec 25 05:55 UTC │
	│ ssh     │ -p auto-880308 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                             │ auto-880308    │ jenkins │ v1.37.0 │ 09 Dec 25 05:55 UTC │                     │
	│ ssh     │ -p auto-880308 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                       │ auto-880308    │ jenkins │ v1.37.0 │ 09 Dec 25 05:55 UTC │ 09 Dec 25 05:55 UTC │
	│ ssh     │ -p auto-880308 sudo cri-dockerd --version                                                                                                │ auto-880308    │ jenkins │ v1.37.0 │ 09 Dec 25 05:55 UTC │ 09 Dec 25 05:55 UTC │
	│ ssh     │ -p auto-880308 sudo systemctl status containerd --all --full --no-pager                                                                  │ auto-880308    │ jenkins │ v1.37.0 │ 09 Dec 25 05:55 UTC │                     │
	│ ssh     │ -p auto-880308 sudo systemctl cat containerd --no-pager                                                                                  │ auto-880308    │ jenkins │ v1.37.0 │ 09 Dec 25 05:55 UTC │ 09 Dec 25 05:55 UTC │
	│ ssh     │ -p auto-880308 sudo cat /lib/systemd/system/containerd.service                                                                           │ auto-880308    │ jenkins │ v1.37.0 │ 09 Dec 25 05:55 UTC │ 09 Dec 25 05:55 UTC │
	│ ssh     │ -p auto-880308 sudo cat /etc/containerd/config.toml                                                                                      │ auto-880308    │ jenkins │ v1.37.0 │ 09 Dec 25 05:55 UTC │ 09 Dec 25 05:55 UTC │
	│ ssh     │ -p auto-880308 sudo containerd config dump                                                                                               │ auto-880308    │ jenkins │ v1.37.0 │ 09 Dec 25 05:55 UTC │ 09 Dec 25 05:55 UTC │
	│ ssh     │ -p auto-880308 sudo systemctl status crio --all --full --no-pager                                                                        │ auto-880308    │ jenkins │ v1.37.0 │ 09 Dec 25 05:55 UTC │ 09 Dec 25 05:55 UTC │
	│ ssh     │ -p auto-880308 sudo systemctl cat crio --no-pager                                                                                        │ auto-880308    │ jenkins │ v1.37.0 │ 09 Dec 25 05:55 UTC │ 09 Dec 25 05:55 UTC │
	│ ssh     │ -p auto-880308 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                              │ auto-880308    │ jenkins │ v1.37.0 │ 09 Dec 25 05:55 UTC │ 09 Dec 25 05:55 UTC │
	│ ssh     │ -p auto-880308 sudo crio config                                                                                                          │ auto-880308    │ jenkins │ v1.37.0 │ 09 Dec 25 05:55 UTC │ 09 Dec 25 05:55 UTC │
	│ delete  │ -p auto-880308                                                                                                                           │ auto-880308    │ jenkins │ v1.37.0 │ 09 Dec 25 05:55 UTC │ 09 Dec 25 05:55 UTC │
	│ pause   │ -p pause-360536 --alsologtostderr -v=5                                                                                                   │ pause-360536   │ jenkins │ v1.37.0 │ 09 Dec 25 05:55 UTC │                     │
	│ start   │ -p kindnet-880308 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio │ kindnet-880308 │ jenkins │ v1.37.0 │ 09 Dec 25 05:55 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 05:55:34
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 05:55:34.913375 1814267 out.go:360] Setting OutFile to fd 1 ...
	I1209 05:55:34.913631 1814267 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:55:34.913661 1814267 out.go:374] Setting ErrFile to fd 2...
	I1209 05:55:34.913681 1814267 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:55:34.914149 1814267 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 05:55:34.914877 1814267 out.go:368] Setting JSON to false
	I1209 05:55:34.916332 1814267 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":38275,"bootTime":1765221460,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1209 05:55:34.916440 1814267 start.go:143] virtualization:  
	I1209 05:55:34.920398 1814267 out.go:179] * [kindnet-880308] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 05:55:34.924500 1814267 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 05:55:34.924693 1814267 notify.go:221] Checking for updates...
	I1209 05:55:34.931107 1814267 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 05:55:34.934048 1814267 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 05:55:34.939540 1814267 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1577059/.minikube
	I1209 05:55:34.942404 1814267 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 05:55:34.945299 1814267 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 05:55:34.948678 1814267 config.go:182] Loaded profile config "pause-360536": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 05:55:34.948778 1814267 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 05:55:34.986279 1814267 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 05:55:34.986414 1814267 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:55:35.080923 1814267 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 05:55:35.069954063 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:55:35.081046 1814267 docker.go:319] overlay module found
	I1209 05:55:35.089439 1814267 out.go:179] * Using the docker driver based on user configuration
	I1209 05:55:35.092446 1814267 start.go:309] selected driver: docker
	I1209 05:55:35.092492 1814267 start.go:927] validating driver "docker" against <nil>
	I1209 05:55:35.092507 1814267 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 05:55:35.093261 1814267 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:55:35.169508 1814267 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 05:55:35.156258511 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:55:35.169671 1814267 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1209 05:55:35.169901 1814267 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 05:55:35.173345 1814267 out.go:179] * Using Docker driver with root privileges
	I1209 05:55:35.176176 1814267 cni.go:84] Creating CNI manager for "kindnet"
	I1209 05:55:35.176211 1814267 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 05:55:35.176314 1814267 start.go:353] cluster config:
	{Name:kindnet-880308 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-880308 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 05:55:35.179657 1814267 out.go:179] * Starting "kindnet-880308" primary control-plane node in "kindnet-880308" cluster
	I1209 05:55:35.182508 1814267 cache.go:134] Beginning downloading kic base image for docker with crio
	I1209 05:55:35.185716 1814267 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
	I1209 05:55:35.188741 1814267 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 05:55:35.188795 1814267 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1209 05:55:35.188809 1814267 cache.go:65] Caching tarball of preloaded images
	I1209 05:55:35.188815 1814267 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 05:55:35.188910 1814267 preload.go:238] Found /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1209 05:55:35.188921 1814267 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1209 05:55:35.189039 1814267 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/kindnet-880308/config.json ...
	I1209 05:55:35.189056 1814267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/kindnet-880308/config.json: {Name:mk48b96252091c2720edf7c8581b6d50adc350a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 05:55:35.217418 1814267 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 05:55:35.217440 1814267 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
	I1209 05:55:35.217455 1814267 cache.go:243] Successfully downloaded all kic artifacts
	I1209 05:55:35.217486 1814267 start.go:360] acquireMachinesLock for kindnet-880308: {Name:mk7ba07a8c4d7a755a13f7130e368d2585b89d94 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 05:55:35.217587 1814267 start.go:364] duration metric: took 84.268µs to acquireMachinesLock for "kindnet-880308"
	I1209 05:55:35.217611 1814267 start.go:93] Provisioning new machine with config: &{Name:kindnet-880308 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-880308 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 05:55:35.217685 1814267 start.go:125] createHost starting for "" (driver="docker")
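	This "Last Start" log comes from the kindnet-880308 start recorded in the audit table; the dump was taken at 05:55:39, so the log breaks off at createHost while provisioning was still in progress. To replay the same invocation verbatim:

		out/minikube-linux-arm64 start -p kindnet-880308 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker --container-runtime=crio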
	
	
	==> CRI-O <==
	Dec 09 05:55:13 pause-360536 crio[2105]: time="2025-12-09T05:55:13.819962942Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 09 05:55:13 pause-360536 crio[2105]: time="2025-12-09T05:55:13.845195037Z" level=info msg="Starting container: e7467cb13242fd4298ef6b94d6779d88e592279e8ce4e8bdaa7e3f1524b1df76" id=d3676d51-6f51-40f2-aa03-f1a0ae0f3613 name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 05:55:13 pause-360536 crio[2105]: time="2025-12-09T05:55:13.873790647Z" level=info msg="Started container" PID=2329 containerID=900df275bbf8f64c19d38b94f9cde64f7087cccdcf844c808580d2b27164b4a1 description=kube-system/kube-apiserver-pause-360536/kube-apiserver id=659c0e8b-8475-427f-976e-2bfc432979b1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6df2c4a7f1905addee280a0c670291c7a2b470862f8fb82bdec74afd10e86705
	Dec 09 05:55:13 pause-360536 crio[2105]: time="2025-12-09T05:55:13.883054228Z" level=info msg="Created container 9772e51b9b6c53ba4dff21be0cd170fdf5aaadcb89b56ddda43a3ddf5eef57e6: kube-system/kindnet-k2bj9/kindnet-cni" id=3005728b-185d-4610-a443-c739a7a17648 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 05:55:13 pause-360536 crio[2105]: time="2025-12-09T05:55:13.883532428Z" level=info msg="Started container" PID=2341 containerID=e7467cb13242fd4298ef6b94d6779d88e592279e8ce4e8bdaa7e3f1524b1df76 description=kube-system/kube-scheduler-pause-360536/kube-scheduler id=d3676d51-6f51-40f2-aa03-f1a0ae0f3613 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e4c3ec79a4f58c6b1d6da060320f462421c7eb23f03483883a805fad7fe6446e
	Dec 09 05:55:13 pause-360536 crio[2105]: time="2025-12-09T05:55:13.88796971Z" level=info msg="Starting container: 9772e51b9b6c53ba4dff21be0cd170fdf5aaadcb89b56ddda43a3ddf5eef57e6" id=24ba51e1-4c88-40d5-99e2-5c0294d43592 name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 05:55:13 pause-360536 crio[2105]: time="2025-12-09T05:55:13.89616544Z" level=info msg="Started container" PID=2356 containerID=9772e51b9b6c53ba4dff21be0cd170fdf5aaadcb89b56ddda43a3ddf5eef57e6 description=kube-system/kindnet-k2bj9/kindnet-cni id=24ba51e1-4c88-40d5-99e2-5c0294d43592 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9417e067c57cd7e5aaa13cdbea7c1fe985ad9a537bebd87288f425a704aef5b3
	Dec 09 05:55:13 pause-360536 crio[2105]: time="2025-12-09T05:55:13.89733145Z" level=info msg="Created container 712b8e299e0d00a264d748cd4d3f07ee466c6e9f18877c2a880b2ab92bf83c95: kube-system/kube-proxy-c64ck/kube-proxy" id=f4d90bd3-3034-4e70-92b2-ea0374f8ed9f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 09 05:55:13 pause-360536 crio[2105]: time="2025-12-09T05:55:13.898163032Z" level=info msg="Starting container: 712b8e299e0d00a264d748cd4d3f07ee466c6e9f18877c2a880b2ab92bf83c95" id=98e3f0b7-e4f7-4380-b160-97bbfead89e4 name=/runtime.v1.RuntimeService/StartContainer
	Dec 09 05:55:13 pause-360536 crio[2105]: time="2025-12-09T05:55:13.907888854Z" level=info msg="Started container" PID=2372 containerID=712b8e299e0d00a264d748cd4d3f07ee466c6e9f18877c2a880b2ab92bf83c95 description=kube-system/kube-proxy-c64ck/kube-proxy id=98e3f0b7-e4f7-4380-b160-97bbfead89e4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=76df82f24f4c054c3d247254330d249186252d6c39e72623229a3808d5f2d7b4
	Dec 09 05:55:24 pause-360536 crio[2105]: time="2025-12-09T05:55:24.36798651Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 09 05:55:24 pause-360536 crio[2105]: time="2025-12-09T05:55:24.371816089Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 09 05:55:24 pause-360536 crio[2105]: time="2025-12-09T05:55:24.371974926Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 09 05:55:24 pause-360536 crio[2105]: time="2025-12-09T05:55:24.372063985Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 09 05:55:24 pause-360536 crio[2105]: time="2025-12-09T05:55:24.377662715Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 09 05:55:24 pause-360536 crio[2105]: time="2025-12-09T05:55:24.377702814Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 09 05:55:24 pause-360536 crio[2105]: time="2025-12-09T05:55:24.377730876Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 09 05:55:24 pause-360536 crio[2105]: time="2025-12-09T05:55:24.380861543Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 09 05:55:24 pause-360536 crio[2105]: time="2025-12-09T05:55:24.380896473Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 09 05:55:24 pause-360536 crio[2105]: time="2025-12-09T05:55:24.380919447Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 09 05:55:24 pause-360536 crio[2105]: time="2025-12-09T05:55:24.384529947Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 09 05:55:24 pause-360536 crio[2105]: time="2025-12-09T05:55:24.384978723Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 09 05:55:24 pause-360536 crio[2105]: time="2025-12-09T05:55:24.385061637Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 09 05:55:24 pause-360536 crio[2105]: time="2025-12-09T05:55:24.392586467Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 09 05:55:24 pause-360536 crio[2105]: time="2025-12-09T05:55:24.392769255Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	712b8e299e0d0       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   24 seconds ago       Running             kube-proxy                1                   76df82f24f4c0       kube-proxy-c64ck                       kube-system
	9772e51b9b6c5       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   24 seconds ago       Running             kindnet-cni               1                   9417e067c57cd       kindnet-k2bj9                          kube-system
	e7467cb13242f       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   24 seconds ago       Running             kube-scheduler            1                   e4c3ec79a4f58       kube-scheduler-pause-360536            kube-system
	3d49925b5eff2       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   24 seconds ago       Running             kube-controller-manager   1                   c9b47f4981d83       kube-controller-manager-pause-360536   kube-system
	900df275bbf8f       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   24 seconds ago       Running             kube-apiserver            1                   6df2c4a7f1905       kube-apiserver-pause-360536            kube-system
	99893bf59b705       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   25 seconds ago       Running             etcd                      1                   b09b77df05d45       etcd-pause-360536                      kube-system
	2c3e701aebc8f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   25 seconds ago       Running             coredns                   1                   4b7285ef1ca7a       coredns-66bc5c9577-z2ccv               kube-system
	f8b656cd67507       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   38 seconds ago       Exited              coredns                   0                   4b7285ef1ca7a       coredns-66bc5c9577-z2ccv               kube-system
	dbc988236c83b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   9417e067c57cd       kindnet-k2bj9                          kube-system
	4ac4f4354ea8a       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   About a minute ago   Exited              kube-proxy                0                   76df82f24f4c0       kube-proxy-c64ck                       kube-system
	bed22ccb7174d       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   About a minute ago   Exited              kube-controller-manager   0                   c9b47f4981d83       kube-controller-manager-pause-360536   kube-system
	0ad5fb714599b       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   About a minute ago   Exited              etcd                      0                   b09b77df05d45       etcd-pause-360536                      kube-system
	ce2dba3551b99       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   About a minute ago   Exited              kube-scheduler            0                   e4c3ec79a4f58       kube-scheduler-pause-360536            kube-system
	09de1532a88e8       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   About a minute ago   Exited              kube-apiserver            0                   6df2c4a7f1905       kube-apiserver-pause-360536            kube-system
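	This table shows CRI-level container state; a sketch for reproducing it on the node, assuming crictl is on the node's PATH as in the kicbase image:

		out/minikube-linux-arm64 ssh -p pause-360536 sudo crictl ps -a

	Note the pairing of ATTEMPT 0 (Exited) and ATTEMPT 1 (Running) rows that share the same POD ID: the restart visible in these logs recreated every control-plane container inside its original sandbox rather than rebuilding the pods.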
	
	
	==> coredns [2c3e701aebc8fd7c9cacf4d34b3b6b1278c4671dcdaaffb6a19b9c2a9760602f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34316 - 46814 "HINFO IN 4234871655145286167.8698256232489913746. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027834306s
	
	
	==> coredns [f8b656cd67507995d80e225eb30dd7eac6852d52de0d039d7c73b9215073240a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57922 - 7641 "HINFO IN 1582322749603180769.2648909561381567358. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01520181s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
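	Reading the two coredns logs together: the ATTEMPT 0 instance (f8b656...) served normally and then shut down on SIGTERM, while the ATTEMPT 1 instance (2c3e701...) came up before the apiserver was reachable, retried its list calls against 10.96.0.1:443 until they succeeded, and only then began serving on :53. A quick health check, assuming the standard kubeadm k8s-app=kube-dns label on the coredns pods:

		kubectl --context pause-360536 -n kube-system get pods -l k8s-app=kube-dns -o wide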
	
	
	==> describe nodes <==
	Name:               pause-360536
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-360536
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d
	                    minikube.k8s.io/name=pause-360536
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_09T05_54_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Dec 2025 05:54:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-360536
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Dec 2025 05:55:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Dec 2025 05:54:59 +0000   Tue, 09 Dec 2025 05:54:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Dec 2025 05:54:59 +0000   Tue, 09 Dec 2025 05:54:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Dec 2025 05:54:59 +0000   Tue, 09 Dec 2025 05:54:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Dec 2025 05:54:59 +0000   Tue, 09 Dec 2025 05:54:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-360536
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 23f1bd729e908485546e733d693697cd
	  System UUID:                11f44272-664a-4239-937b-bd37f60e1949
	  Boot ID:                    3c42bf6f-64e9-4298-a947-b5a2e6063f1e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-z2ccv                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     82s
	  kube-system                 etcd-pause-360536                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         87s
	  kube-system                 kindnet-k2bj9                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      82s
	  kube-system                 kube-apiserver-pause-360536             250m (12%)    0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-controller-manager-pause-360536    200m (10%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-proxy-c64ck                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-scheduler-pause-360536             100m (5%)     0 (0%)      0 (0%)           0 (0%)         87s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 79s                kube-proxy       
	  Normal   Starting                 20s                kube-proxy       
	  Normal   NodeHasSufficientMemory  98s (x8 over 98s)  kubelet          Node pause-360536 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    98s (x8 over 98s)  kubelet          Node pause-360536 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     98s (x8 over 98s)  kubelet          Node pause-360536 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 87s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  87s                kubelet          Node pause-360536 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    87s                kubelet          Node pause-360536 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     87s                kubelet          Node pause-360536 status is now: NodeHasSufficientPID
	  Normal   Starting                 87s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           83s                node-controller  Node pause-360536 event: Registered Node pause-360536 in Controller
	  Normal   NodeReady                39s                kubelet          Node pause-360536 status is now: NodeReady
	  Warning  ContainerGCFailed        27s                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           17s                node-controller  Node pause-360536 event: Registered Node pause-360536 in Controller
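	The ContainerGCFailed warning (crio.sock missing) pins the window in which CRI-O itself was restarted, which also explains the duplicate Starting and RegisteredNode events roughly a minute apart. To confirm the runtime recovered, the same probe the suite runs elsewhere applies here:

		out/minikube-linux-arm64 ssh -p pause-360536 sudo systemctl status crio --all --full --no-pager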
	
	
	==> dmesg <==
	[ +51.869899] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:17] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:19] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:23] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:24] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:25] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:26] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:27] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:29] overlayfs: idmapped layers are currently not supported
	[ +17.202326] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:30] overlayfs: idmapped layers are currently not supported
	[ +45.070414] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:31] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:32] overlayfs: idmapped layers are currently not supported
	[ +26.464722] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:33] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:34] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:36] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:38] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:39] overlayfs: idmapped layers are currently not supported
	[  +3.009285] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:40] overlayfs: idmapped layers are currently not supported
	[ +36.331905] overlayfs: idmapped layers are currently not supported
	[Dec 9 05:53] overlayfs: idmapped layers are currently not supported
	[  +0.201178] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0ad5fb714599bb552554da63005578d1324d21b770795097c0345f14a0df959b] <==
	{"level":"warn","ts":"2025-12-09T05:54:06.176563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:54:06.202012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:54:06.229606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:54:06.272805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:54:06.412341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:54:16.695723Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"104.185666ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/service-cidrs-controller\" limit:1 ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2025-12-09T05:54:16.711694Z","caller":"traceutil/trace.go:172","msg":"trace[1732159273] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/service-cidrs-controller; range_end:; response_count:1; response_revision:355; }","duration":"120.164452ms","start":"2025-12-09T05:54:16.591511Z","end":"2025-12-09T05:54:16.711675Z","steps":["trace[1732159273] 'agreement among raft nodes before linearized reading'  (duration: 103.967087ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-09T05:55:05.332379Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-09T05:55:05.332440Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-360536","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-12-09T05:55:05.332530Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-09T05:55:05.332590Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-09T05:55:05.615337Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-09T05:55:05.615399Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-12-09T05:55:05.615502Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-09T05:55:05.615515Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-09T05:55:05.615746Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-09T05:55:05.615781Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-09T05:55:05.615789Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-09T05:55:05.615832Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-09T05:55:05.615840Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-09T05:55:05.615846Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-09T05:55:05.618458Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-12-09T05:55:05.618527Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-09T05:55:05.618552Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-09T05:55:05.618559Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-360536","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> etcd [99893bf59b7055a42e8a3f852c218a2a871130283bcb5b180e6d8773a8a89cff] <==
	{"level":"warn","ts":"2025-12-09T05:55:16.679138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:16.702220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:16.725363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:16.739353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:16.758238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:16.788158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:16.799797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:16.813327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:16.875090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:16.882585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:16.906499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:16.927470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:16.944914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:16.966643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:16.989781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:17.001317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:17.022110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:17.037438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:17.054761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:17.075407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:17.104118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:17.124157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:17.149340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:17.166755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-09T05:55:17.246218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41668","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 05:55:39 up 10:37,  0 user,  load average: 2.48, 2.09, 1.97
	Linux pause-360536 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9772e51b9b6c53ba4dff21be0cd170fdf5aaadcb89b56ddda43a3ddf5eef57e6] <==
	I1209 05:55:14.154346       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1209 05:55:14.154565       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1209 05:55:14.154770       1 main.go:148] setting mtu 1500 for CNI 
	I1209 05:55:14.154788       1 main.go:178] kindnetd IP family: "ipv4"
	I1209 05:55:14.154801       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-09T05:55:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1209 05:55:14.393483       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1209 05:55:14.393580       1 controller.go:381] "Waiting for informer caches to sync"
	I1209 05:55:14.393616       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1209 05:55:14.426724       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1209 05:55:18.196605       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1209 05:55:18.196643       1 metrics.go:72] Registering metrics
	I1209 05:55:18.196711       1 controller.go:711] "Syncing nftables rules"
	I1209 05:55:24.367548       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1209 05:55:24.367699       1 main.go:301] handling current node
	I1209 05:55:34.369124       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1209 05:55:34.369187       1 main.go:301] handling current node
	
	
	==> kindnet [dbc988236c83b9792452edc69aa48d068d2c02e4e2a16e1ac5e9317bd8e7c144] <==
	I1209 05:54:19.529873       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1209 05:54:19.530297       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1209 05:54:19.530492       1 main.go:148] setting mtu 1500 for CNI 
	I1209 05:54:19.530556       1 main.go:178] kindnetd IP family: "ipv4"
	I1209 05:54:19.530638       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-09T05:54:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1209 05:54:19.823787       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1209 05:54:19.823813       1 controller.go:381] "Waiting for informer caches to sync"
	I1209 05:54:19.823822       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1209 05:54:19.824569       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1209 05:54:49.824253       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1209 05:54:49.824379       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1209 05:54:49.824473       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1209 05:54:49.824546       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1209 05:54:51.224508       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1209 05:54:51.224645       1 metrics.go:72] Registering metrics
	I1209 05:54:51.224830       1 controller.go:711] "Syncing nftables rules"
	I1209 05:54:59.827681       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1209 05:54:59.827733       1 main.go:301] handling current node
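	Both kindnet instances log the same benign startup notes: the nri plugin message just means CRI-O is running without an NRI socket, and the "Failed to watch ... i/o timeout" lines in the ATTEMPT 0 log are most plausibly a dial timeout against the 10.96.0.1 service VIP from early in the node's life, before it was reachable. To check how NRI is configured in this CRI-O build (the section naming is an assumption; recent releases expose it under [crio.nri]):

		out/minikube-linux-arm64 ssh -p pause-360536 sudo crio config | grep -i -A3 nri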
	
	
	==> kube-apiserver [09de1532a88e8f8b72157dc2ccf11c063c29511bd00d34eaf7f74ba8870c0f63] <==
	W1209 05:55:05.365241       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.365798       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.365990       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.366672       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.366906       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.367150       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.367381       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.368020       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.368103       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.368147       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.368182       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.368216       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.368256       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.368295       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.368332       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.369042       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.369141       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.369182       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.369462       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.369505       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.369801       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.369856       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.370158       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1209 05:55:05.370233       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
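
The block above is the shut-down apiserver's gRPC client: every storage/watch subchannel keeps redialing etcd's default client endpoint at 127.0.0.1:2379 and getting connection refused because etcd has already stopped. A minimal standalone Go sketch (not part of the test suite) that reproduces the same dial error when nothing is listening on 2379:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// 127.0.0.1:2379 is etcd's default client endpoint, the address every
		// gRPC subchannel in the log above is retrying.
		conn, err := net.DialTimeout("tcp", "127.0.0.1:2379", 2*time.Second)
		if err != nil {
			// With etcd down this prints:
			// dial tcp 127.0.0.1:2379: connect: connection refused
			fmt.Println(err)
			return
		}
		defer conn.Close()
		fmt.Println("etcd client port is reachable")
	}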
	
	
	==> kube-apiserver [900df275bbf8f64c19d38b94f9cde64f7087cccdcf844c808580d2b27164b4a1] <==
	I1209 05:55:18.036884       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1209 05:55:18.049962       1 aggregator.go:171] initial CRD sync complete...
	I1209 05:55:18.049984       1 autoregister_controller.go:144] Starting autoregister controller
	I1209 05:55:18.049992       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1209 05:55:18.050000       1 cache.go:39] Caches are synced for autoregister controller
	I1209 05:55:18.056009       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1209 05:55:18.074742       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1209 05:55:18.074769       1 policy_source.go:240] refreshing policies
	I1209 05:55:18.084578       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1209 05:55:18.091913       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1209 05:55:18.092017       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1209 05:55:18.092176       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1209 05:55:18.104538       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1209 05:55:18.104860       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1209 05:55:18.104912       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1209 05:55:18.105566       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1209 05:55:18.162734       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1209 05:55:18.162903       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E1209 05:55:18.194744       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1209 05:55:18.809153       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1209 05:55:19.977462       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1209 05:55:21.562429       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1209 05:55:21.630506       1 controller.go:667] quota admission added evaluator for: endpoints
	I1209 05:55:21.730794       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1209 05:55:21.795521       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [3d49925b5eff2c3fa79ae102703805c6c5ecb00d2bf50c997425bb41b782fedf] <==
	I1209 05:55:21.499540       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1209 05:55:21.502930       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1209 05:55:21.506228       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1209 05:55:21.509437       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1209 05:55:21.512769       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1209 05:55:21.517898       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1209 05:55:21.517983       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1209 05:55:21.518011       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1209 05:55:21.518016       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1209 05:55:21.518021       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1209 05:55:21.520813       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1209 05:55:21.522064       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1209 05:55:21.522862       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1209 05:55:21.526347       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1209 05:55:21.532530       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1209 05:55:21.532638       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1209 05:55:21.532678       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1209 05:55:21.533303       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1209 05:55:21.533373       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1209 05:55:21.535776       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1209 05:55:21.539244       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1209 05:55:21.540810       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1209 05:55:21.545129       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1209 05:55:21.546877       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1209 05:55:21.551587       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	
	
	==> kube-controller-manager [bed22ccb7174d8573b035c5745652a3502746577e2e3a33bcf8c0809160eb6c7] <==
	I1209 05:54:15.643529       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1209 05:54:15.643598       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1209 05:54:15.646464       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1209 05:54:15.649744       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1209 05:54:15.649801       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1209 05:54:15.653271       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1209 05:54:15.654809       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1209 05:54:15.662872       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1209 05:54:15.662921       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1209 05:54:15.662951       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1209 05:54:15.662956       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1209 05:54:15.662961       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1209 05:54:15.669266       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1209 05:54:15.675025       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-360536" podCIDRs=["10.244.0.0/24"]
	I1209 05:54:15.675081       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1209 05:54:15.684357       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1209 05:54:15.684619       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1209 05:54:15.684912       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1209 05:54:15.685125       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1209 05:54:15.686230       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1209 05:54:15.686369       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1209 05:54:15.689290       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1209 05:54:15.689341       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1209 05:54:15.701037       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1209 05:55:00.639625       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [4ac4f4354ea8aeaa8d72fb1db132bac2c9601f0cd38da6ce7cc2fe19b5758e74] <==
	I1209 05:54:19.481991       1 server_linux.go:53] "Using iptables proxy"
	I1209 05:54:19.580565       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1209 05:54:19.687372       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1209 05:54:19.687441       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1209 05:54:19.687537       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 05:54:19.710354       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1209 05:54:19.710486       1 server_linux.go:132] "Using iptables Proxier"
	I1209 05:54:19.714931       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 05:54:19.715327       1 server.go:527] "Version info" version="v1.34.2"
	I1209 05:54:19.715400       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 05:54:19.720882       1 config.go:200] "Starting service config controller"
	I1209 05:54:19.720908       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1209 05:54:19.720929       1 config.go:106] "Starting endpoint slice config controller"
	I1209 05:54:19.720934       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1209 05:54:19.720944       1 config.go:403] "Starting serviceCIDR config controller"
	I1209 05:54:19.720948       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1209 05:54:19.721652       1 config.go:309] "Starting node config controller"
	I1209 05:54:19.721681       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1209 05:54:19.721688       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1209 05:54:19.821390       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1209 05:54:19.821392       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1209 05:54:19.821481       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [712b8e299e0d00a264d748cd4d3f07ee466c6e9f18877c2a880b2ab92bf83c95] <==
	I1209 05:55:15.562494       1 server_linux.go:53] "Using iptables proxy"
	I1209 05:55:16.962666       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1209 05:55:18.264302       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1209 05:55:18.264434       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1209 05:55:18.264546       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 05:55:18.304004       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1209 05:55:18.304134       1 server_linux.go:132] "Using iptables Proxier"
	I1209 05:55:18.324381       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 05:55:18.324755       1 server.go:527] "Version info" version="v1.34.2"
	I1209 05:55:18.325283       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 05:55:18.330785       1 config.go:200] "Starting service config controller"
	I1209 05:55:18.330875       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1209 05:55:18.330922       1 config.go:106] "Starting endpoint slice config controller"
	I1209 05:55:18.330990       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1209 05:55:18.331028       1 config.go:403] "Starting serviceCIDR config controller"
	I1209 05:55:18.331076       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1209 05:55:18.333153       1 config.go:309] "Starting node config controller"
	I1209 05:55:18.333783       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1209 05:55:18.333847       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1209 05:55:18.431983       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1209 05:55:18.431992       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1209 05:55:18.432047       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ce2dba3551b996a64989bc788883d90b88da5c42ffb10f2cabc3d025a51486ef] <==
	E1209 05:54:07.966166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1209 05:54:07.966204       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1209 05:54:07.966284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1209 05:54:07.966323       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1209 05:54:07.966354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1209 05:54:07.966393       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1209 05:54:07.966432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1209 05:54:07.966469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1209 05:54:08.780359       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1209 05:54:08.875685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1209 05:54:08.886854       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1209 05:54:08.972156       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1209 05:54:08.987988       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1209 05:54:09.019046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1209 05:54:09.043124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1209 05:54:09.054066       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1209 05:54:09.091564       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1209 05:54:09.133252       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1209 05:54:09.153172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1209 05:54:11.855876       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 05:55:05.326675       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1209 05:55:05.326706       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1209 05:55:05.327759       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1209 05:55:05.327817       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1209 05:55:05.327833       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e7467cb13242fd4298ef6b94d6779d88e592279e8ce4e8bdaa7e3f1524b1df76] <==
	I1209 05:55:15.485328       1 serving.go:386] Generated self-signed cert in-memory
	W1209 05:55:17.968231       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1209 05:55:17.968374       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1209 05:55:17.968414       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1209 05:55:17.968444       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1209 05:55:18.083861       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1209 05:55:18.083956       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 05:55:18.100130       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1209 05:55:18.101937       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 05:55:18.106857       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 05:55:18.101961       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1209 05:55:18.210860       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 09 05:55:13 pause-360536 kubelet[1341]: E1209 05:55:13.449273    1341 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-360536\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="57b1e1807fd819fa7ad5ea62934d0125" pod="kube-system/kube-apiserver-pause-360536"
	Dec 09 05:55:13 pause-360536 kubelet[1341]: E1209 05:55:13.449728    1341 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-360536\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="ba2558a3cdec79a8a34617c6d55e31ae" pod="kube-system/kube-scheduler-pause-360536"
	Dec 09 05:55:13 pause-360536 kubelet[1341]: E1209 05:55:13.450099    1341 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-360536\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3aaad2ee6567158e025c9d08ddef2553" pod="kube-system/etcd-pause-360536"
	Dec 09 05:55:13 pause-360536 kubelet[1341]: E1209 05:55:13.450414    1341 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-360536\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3920af4bed5f906a5428f98a986f2ee6" pod="kube-system/kube-controller-manager-pause-360536"
	Dec 09 05:55:13 pause-360536 kubelet[1341]: I1209 05:55:13.576149    1341 scope.go:117] "RemoveContainer" containerID="dbc988236c83b9792452edc69aa48d068d2c02e4e2a16e1ac5e9317bd8e7c144"
	Dec 09 05:55:13 pause-360536 kubelet[1341]: E1209 05:55:13.578985    1341 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-360536\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="57b1e1807fd819fa7ad5ea62934d0125" pod="kube-system/kube-apiserver-pause-360536"
	Dec 09 05:55:13 pause-360536 kubelet[1341]: E1209 05:55:13.579779    1341 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-360536\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="ba2558a3cdec79a8a34617c6d55e31ae" pod="kube-system/kube-scheduler-pause-360536"
	Dec 09 05:55:13 pause-360536 kubelet[1341]: E1209 05:55:13.581010    1341 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-360536\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3aaad2ee6567158e025c9d08ddef2553" pod="kube-system/etcd-pause-360536"
	Dec 09 05:55:13 pause-360536 kubelet[1341]: E1209 05:55:13.581619    1341 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-360536\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3920af4bed5f906a5428f98a986f2ee6" pod="kube-system/kube-controller-manager-pause-360536"
	Dec 09 05:55:13 pause-360536 kubelet[1341]: E1209 05:55:13.582747    1341 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-k2bj9\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="cc8996cb-ab02-4cd5-b339-c76a346e299e" pod="kube-system/kindnet-k2bj9"
	Dec 09 05:55:13 pause-360536 kubelet[1341]: E1209 05:55:13.584280    1341 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-z2ccv\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="576ed217-b1ef-4ae7-9488-aac220856947" pod="kube-system/coredns-66bc5c9577-z2ccv"
	Dec 09 05:55:13 pause-360536 kubelet[1341]: E1209 05:55:13.605208    1341 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.85.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-360536?timeout=10s\": dial tcp 192.168.85.2:8443: connect: connection refused" interval="1.6s"
	Dec 09 05:55:13 pause-360536 kubelet[1341]: I1209 05:55:13.704158    1341 scope.go:117] "RemoveContainer" containerID="4ac4f4354ea8aeaa8d72fb1db132bac2c9601f0cd38da6ce7cc2fe19b5758e74"
	Dec 09 05:55:13 pause-360536 kubelet[1341]: E1209 05:55:13.704893    1341 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-k2bj9\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="cc8996cb-ab02-4cd5-b339-c76a346e299e" pod="kube-system/kindnet-k2bj9"
	Dec 09 05:55:13 pause-360536 kubelet[1341]: E1209 05:55:13.705786    1341 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-z2ccv\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="576ed217-b1ef-4ae7-9488-aac220856947" pod="kube-system/coredns-66bc5c9577-z2ccv"
	Dec 09 05:55:13 pause-360536 kubelet[1341]: E1209 05:55:13.706187    1341 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-360536\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="57b1e1807fd819fa7ad5ea62934d0125" pod="kube-system/kube-apiserver-pause-360536"
	Dec 09 05:55:13 pause-360536 kubelet[1341]: E1209 05:55:13.706496    1341 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-360536\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="ba2558a3cdec79a8a34617c6d55e31ae" pod="kube-system/kube-scheduler-pause-360536"
	Dec 09 05:55:13 pause-360536 kubelet[1341]: E1209 05:55:13.706838    1341 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-360536\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3aaad2ee6567158e025c9d08ddef2553" pod="kube-system/etcd-pause-360536"
	Dec 09 05:55:13 pause-360536 kubelet[1341]: E1209 05:55:13.707252    1341 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-360536\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3920af4bed5f906a5428f98a986f2ee6" pod="kube-system/kube-controller-manager-pause-360536"
	Dec 09 05:55:13 pause-360536 kubelet[1341]: E1209 05:55:13.707639    1341 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c64ck\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="c41f125e-2794-4b73-9e6f-853ef6317344" pod="kube-system/kube-proxy-c64ck"
	Dec 09 05:55:21 pause-360536 kubelet[1341]: W1209 05:55:21.365348    1341 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 09 05:55:31 pause-360536 kubelet[1341]: W1209 05:55:31.380907    1341 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 09 05:55:32 pause-360536 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 09 05:55:32 pause-360536 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 09 05:55:32 pause-360536 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-360536 -n pause-360536
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-360536 -n pause-360536: exit status 2 (372.094583ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-360536 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (8.57s)
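
For context on the status check above: `minikube status --format=...` renders its status struct through Go's text/template, so `{{.APIServer}}` selects a single field ("Running" in the stdout above, which is why the pause check exits non-zero). A rough sketch of the mechanism, with a struct written here for illustration rather than copied from minikube's source:

	package main

	import (
		"os"
		"text/template"
	)

	// Status approximates the fields a --format template can reference
	// (illustrative; the real type lives in the minikube codebase).
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"}
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		tmpl.Execute(os.Stdout, st) // prints: Running
	}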

x
+
TestStartStop/group/no-preload/serial/SecondStart (7200.077s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-089447 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1209 06:11:47.265980 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/calico-880308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 06:12:01.101602 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/kindnet-880308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 06:12:05.765543 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/old-k8s-version-975021/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 06:12:33.466645 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/old-k8s-version-975021/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 06:12:58.587149 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/default-k8s-diff-port-380356/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 06:12:58.593586 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/default-k8s-diff-port-380356/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 06:12:58.605085 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/default-k8s-diff-port-380356/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 06:12:58.626543 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/default-k8s-diff-port-380356/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 06:12:58.668003 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/default-k8s-diff-port-380356/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 06:12:58.749520 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/default-k8s-diff-port-380356/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 06:12:58.911102 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/default-k8s-diff-port-380356/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 06:12:59.232776 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/default-k8s-diff-port-380356/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 06:12:59.874695 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/default-k8s-diff-port-380356/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 06:13:01.156496 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/default-k8s-diff-port-380356/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 06:13:03.718019 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/default-k8s-diff-port-380356/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 06:13:08.839768 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/default-k8s-diff-port-380356/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 06:13:19.081342 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/default-k8s-diff-port-380356/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 06:13:33.862438 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/enable-default-cni-880308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 06:13:36.336851 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/custom-flannel-880308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 06:13:39.562816 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/default-k8s-diff-port-380356/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 06:14:20.524856 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/default-k8s-diff-port-380356/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 06:14:22.262345 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 06:14:31.980048 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 06:15:02.768951 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/auto-880308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 06:15:17.798343 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/flannel-880308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 06:15:36.456524 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/bridge-880308/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 06:15:42.447056 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/default-k8s-diff-port-380356/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
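
The cert_rotation errors above come from client-go's TLS transport cache trying to reload client certificates for profiles that earlier tests already deleted, so every reload hits "no such file or directory". A small Go sketch of the failing step, using a made-up stale path for illustration:

	package main

	import (
		"crypto/tls"
		"fmt"
	)

	func main() {
		// Hypothetical paths for a profile that has been deleted.
		cert := "/home/jenkins/.minikube/profiles/deleted-profile/client.crt"
		key := "/home/jenkins/.minikube/profiles/deleted-profile/client.key"
		if _, err := tls.LoadX509KeyPair(cert, key); err != nil {
			// Prints the same underlying error as the log:
			// open .../client.crt: no such file or directory
			fmt.Println("Loading client cert failed:", err)
		}
	}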
panic: test timed out after 2h0m0s
	running tests:
		TestNetworkPlugins (37m34s)
		TestNetworkPlugins/group (14m36s)
		TestStartStop (22m20s)
		TestStartStop/group/newest-cni (6m16s)
		TestStartStop/group/newest-cni/serial (6m16s)
		TestStartStop/group/newest-cni/serial/FirstStart (6m16s)
		TestStartStop/group/no-preload (14m36s)
		TestStartStop/group/no-preload/serial (14m36s)
		TestStartStop/group/no-preload/serial/SecondStart (4m17s)
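
This panic is Go's test-binary timeout alarm (testing.(*M).startAlarm, visible in the first goroutine below): once -timeout elapses, the alarm panics and dumps every goroutine, prefixed by the list of still-running tests. A tiny reproduction, saved as a _test.go file and run with `go test -timeout 1s`:

	package demo

	import (
		"testing"
		"time"
	)

	// TestHangs sleeps past the -timeout deadline, producing the same
	// "panic: test timed out after 1s" followed by a goroutine dump.
	func TestHangs(t *testing.T) {
		time.Sleep(10 * time.Second)
	}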

goroutine 4997 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2682 +0x2b0
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x38

goroutine 1 [chan receive, 32 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x40002e4540, 0x400078dbb8)
	/usr/local/go/src/testing/testing.go:1940 +0x104
testing.runTests(0x40006b4060, {0x534c580, 0x2c, 0x2c}, {0x400078dd08?, 0x125774?, 0x5374f80?})
	/usr/local/go/src/testing/testing.go:2475 +0x3b8
testing.(*M).Run(0x4000699b80)
	/usr/local/go/src/testing/testing.go:2337 +0x530
k8s.io/minikube/test/integration.TestMain(0x4000699b80)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:64 +0xf0
main.main()
	_testmain.go:133 +0x88

goroutine 4922 [syscall, 6 minutes]:
syscall.Syscall6(0x5f, 0x3, 0x14, 0x40000d7b48, 0x4, 0x40014650e0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:96 +0x2c
internal/syscall/unix.Waitid(0x40000d7ca8?, 0x1929a0?, 0xffffc8ff419f?, 0x0?, 0x40023eb2b0?)
	/usr/local/go/src/internal/syscall/unix/waitid_linux.go:18 +0x44
os.(*Process).pidfdWait.func1(...)
	/usr/local/go/src/os/pidfd_linux.go:109
os.ignoringEINTR(...)
	/usr/local/go/src/os/file_posix.go:256
os.(*Process).pidfdWait(0x40004fe5c0)
	/usr/local/go/src/os/pidfd_linux.go:108 +0x144
os.(*Process).wait(0x40000d7c78?)
	/usr/local/go/src/os/exec_unix.go:25 +0x24
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:340
os/exec.(*Cmd).Wait(0x4001f68600)
	/usr/local/go/src/os/exec/exec.go:922 +0x38
os/exec.(*Cmd).Run(0x4001f68600)
	/usr/local/go/src/os/exec/exec.go:626 +0x38
k8s.io/minikube/test/integration.Run(0x400131e700, 0x4001f68600)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x154
k8s.io/minikube/test/integration.validateFirstStart({0x36e5f38?, 0x400022d420?}, 0x400131e700, {0x40022d63d8?, 0x23928aa9e75f?}, {0x191a52dd?, 0x191a52dd00161e84?}, {0x6937bd1c?, 0x40000d7f58?}, {0x40015de300?, ...})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:184 +0x88
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0x400131e700?)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:154 +0x44
testing.tRunner(0x400131e700, 0x4001442300)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4921
	/usr/local/go/src/testing/testing.go:1997 +0x364
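
Goroutine 4922 is the FirstStart test parked in helpers_test.go's Run, which ultimately blocks in os.(*Process).Wait on a minikube child process. The shape of that helper is ordinary os/exec plus test logging; a stripped-down sketch with invented names, not minikube's exact code:

	package demo

	import (
		"context"
		"os/exec"
		"testing"
	)

	// runCLI starts a command bound to the test's context and blocks until it
	// exits (Cmd.Run -> Process.Wait, exactly the frames in the trace above).
	func runCLI(ctx context.Context, t *testing.T, name string, args ...string) string {
		t.Helper()
		out, err := exec.CommandContext(ctx, name, args...).CombinedOutput()
		if err != nil {
			t.Logf("%s %v: %v", name, args, err)
		}
		return string(out)
	}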

goroutine 4996 [select, 4 minutes]:
os/exec.(*Cmd).watchCtx(0x4001f68480, 0x4004f22bd0)
	/usr/local/go/src/os/exec/exec.go:789 +0x70
created by os/exec.(*Cmd).Start in goroutine 4993
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 178 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 177
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 177 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e62d0, 0x40000823f0}, 0x40013cd740, 0x40013edf88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e62d0, 0x40000823f0}, 0x8f?, 0x40013cd740, 0x40013cd788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e62d0?, 0x40000823f0?}, 0x0?, 0x40013cd750?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f3bf0?, 0x40001bc080?, 0x400025a900?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 148
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 4307 [chan receive, 16 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40015979e0, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4305
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 3702 [chan receive, 22 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4004f47aa0, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3697
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 4310 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x4004f55210, 0x13)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4004f55200)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702540)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40015979e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x400027aee0?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e62d0?, 0x40000823f0?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e62d0, 0x40000823f0}, 0x4001317f38, {0x369de40, 0x400151c6f0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f3bf0?, {0x369de40?, 0x400151c6f0?}, 0xa0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40023841c0, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4307
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 4117 [chan receive, 18 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001687ce0, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4112
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 4341 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff040, {{0x36f3bf0, 0x40001bc080?}, 0x4001567080?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4364
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 148 [chan receive, 116 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4000764c60, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 169
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 147 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff040, {{0x36f3bf0, 0x40001bc080?}, 0x400025a900?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 169
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 176 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x4004f543d0, 0x2d)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4004f543c0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702540)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4000764c60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40012c8770?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e62d0?, 0x40000823f0?}, 0x6ee?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e62d0, 0x40000823f0}, 0x40012e9f38, {0x369de40, 0x40012e5080}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40013d07a8?, {0x369de40?, 0x40012e5080?}, 0xd0?, 0x161f90?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40012dbdd0, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 148
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 3845 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e62d0, 0x40000823f0}, 0x40013cf740, 0x40013cf788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e62d0, 0x40000823f0}, 0x10?, 0x40013cf740, 0x40013cf788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e62d0?, 0x40000823f0?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x4004edea18?, 0x40012dba50?, 0x40023f82a0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3829
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 4102 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4101
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 588 [IO wait, 113 minutes]:
internal/poll.runtime_pollWait(0xffff5c33bc00, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x400157e480?, 0x2d970?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x400157e480)
	/usr/local/go/src/internal/poll/fd_unix.go:613 +0x21c
net.(*netFD).accept(0x400157e480)
	/usr/local/go/src/net/fd_unix.go:161 +0x28
net.(*TCPListener).accept(0x40003b0bc0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x24
net.(*TCPListener).Accept(0x40003b0bc0)
	/usr/local/go/src/net/tcpsock.go:380 +0x2c
net/http.(*Server).Serve(0x40004d2100, {0x36d38e0, 0x40003b0bc0})
	/usr/local/go/src/net/http/server.go:3463 +0x24c
net/http.(*Server).ListenAndServe(0x40004d2100)
	/usr/local/go/src/net/http/server.go:3389 +0x80
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 586
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x104

goroutine 3171 [chan receive, 38 minutes]:
testing.(*T).Run(0x40002e5880, {0x296d8df?, 0x218b29ac07e5?}, 0x40018929a8)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestNetworkPlugins(0x40002e5880)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:52 +0xe4
testing.tRunner(0x40002e5880, 0x339bac8)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4525 [chan receive, 14 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40022bbce0, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4520
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 4342 [chan receive, 16 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40007657a0, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4364
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 1219 [IO wait, 109 minutes]:
internal/poll.runtime_pollWait(0xffff5c33b200, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x40017ea080?, 0xdbd0c?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x40017ea080)
	/usr/local/go/src/internal/poll/fd_unix.go:613 +0x21c
net.(*netFD).accept(0x40017ea080)
	/usr/local/go/src/net/fd_unix.go:161 +0x28
net.(*TCPListener).accept(0x4001790d80)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x24
net.(*TCPListener).Accept(0x4001790d80)
	/usr/local/go/src/net/tcpsock.go:380 +0x2c
net/http.(*Server).Serve(0x40004d3a00, {0x36d38e0, 0x4001790d80})
	/usr/local/go/src/net/http/server.go:3463 +0x24c
net/http.(*Server).ListenAndServe(0x40004d3a00)
	/usr/local/go/src/net/http/server.go:3389 +0x80
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 1217
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x104

goroutine 4524 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff040, {{0x36f3bf0, 0x40001bc080?}, 0x40016a48c0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4520
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 1098 [select, 109 minutes]:
net/http.(*persistConn).readLoop(0x40013a1200)
	/usr/local/go/src/net/http/transport.go:2398 +0xa6c
created by net/http.(*Transport).dialConn in goroutine 1096
	/usr/local/go/src/net/http/transport.go:1947 +0x111c

goroutine 1470 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff040, {{0x36f3bf0, 0x40001bc080?}, 0x400157ac40?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 1469
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 3844 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0x40004fe7d0, 0x13)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40004fe7c0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702540)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001686f00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40012c8150?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e62d0?, 0x40000823f0?}, 0x40013ce6b8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e62d0, 0x40000823f0}, 0x40000d6f38, {0x369de40, 0x40018cc750}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40013ce788?, {0x369de40?, 0x40018cc750?}, 0x10?, 0x40013ce7a8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40012dbad0, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3829
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 4312 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4311
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 3877 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0x4000763d10, 0x13)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4000763d00)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702540)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40015ecd80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001446e70?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e62d0?, 0x40000823f0?}, 0x4001307ea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e62d0, 0x40000823f0}, 0x40012eef38, {0x369de40, 0x4001507890}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4001307fa8?, {0x369de40?, 0x4001507890?}, 0xb0?, 0x161f90?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x400169a000, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3874
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 3828 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff040, {{0x36f3bf0, 0x40001bc080?}, 0x40016a48c0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3827
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 4100 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x40004feed0, 0x13)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40004feec0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702540)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40023257a0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40016942a0?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e62d0?, 0x40000823f0?}, 0x4001654ea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e62d0, 0x40000823f0}, 0x400166af38, {0x369de40, 0x400151cf00}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4001654fa8?, {0x369de40?, 0x400151cf00?}, 0xf0?, 0x161f90?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40012db7e0, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4097
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 4494 [chan receive, 4 minutes]:
testing.(*T).Run(0x40014b5c00, {0x297a9f0?, 0x40000006ee?}, 0x4001940200)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0x40014b5c00)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:153 +0x1b8
testing.tRunner(0x40014b5c00, 0x4001442080)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3657
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 1099 [select, 109 minutes]:
net/http.(*persistConn).writeLoop(0x40013a1200)
	/usr/local/go/src/net/http/transport.go:2600 +0x94
created by net/http.(*Transport).dialConn in goroutine 1096
	/usr/local/go/src/net/http/transport.go:1948 +0x1164

goroutine 4311 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e62d0, 0x40000823f0}, 0x4001b4ef40, 0x4001b4ef88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e62d0, 0x40000823f0}, 0x0?, 0x4001b4ef40, 0x4001b4ef88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e62d0?, 0x40000823f0?}, 0x36e5f38?, 0x4002153dc0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x4002153ce0?, 0x0?, 0x400218b200?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4307
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 4921 [chan receive, 6 minutes]:
testing.(*T).Run(0x400131e380, {0x29788c3?, 0x40000006ee?}, 0x4001442300)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0x400131e380)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:153 +0x1b8
testing.tRunner(0x400131e380, 0x4001442280)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3655
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 1022 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0x400170c480, 0x400170e460)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 713
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 985 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0x4001519c80, 0x4004f23a40)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 984
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 933 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0x40013af200, 0x400154a070)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 932
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 3829 [chan receive, 20 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001686f00, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3827
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 3655 [chan receive, 6 minutes]:
testing.(*T).Run(0x400157ae00, {0x296ed51?, 0x0?}, 0x4001442280)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1(0x400157ae00)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:128 +0x7e4
testing.tRunner(0x400157ae00, 0x4001790080)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3653
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 3693 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e62d0, 0x40000823f0}, 0x4001650740, 0x4001650788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e62d0, 0x40000823f0}, 0x88?, 0x4001650740, 0x4001650788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e62d0?, 0x40000823f0?}, 0x36e5f38?, 0x40012c9ce0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x400025aa80?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3702
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 4122 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4121
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 4347 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4346
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 3846 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3845
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 1939 [chan send, 79 minutes]:
os/exec.(*Cmd).watchCtx(0x400068b080, 0x40018405b0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1386
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 4795 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4794
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 4121 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e62d0, 0x40000823f0}, 0x4001b4a740, 0x4001319f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e62d0, 0x40000823f0}, 0x58?, 0x4001b4a740, 0x4001b4a788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e62d0?, 0x40000823f0?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x400025a480?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4117
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 3237 [chan receive, 22 minutes]:
testing.(*T).Run(0x4001362700, {0x296d8df?, 0x40000d8f58?}, 0x339bcf8)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop(0x4001362700)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:46 +0x3c
testing.tRunner(0x4001362700, 0x339bb10)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4994 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0xffff5c33b800, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x400200c720?, 0x400153f33a?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x400200c720, {0x400153f33a, 0x4c6, 0x4c6})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x4001bd2118, {0x400153f33a?, 0x4001b4ad68?, 0x8b27c?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x40018cc8a0, {0x369c218, 0x40000a6240})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369c400, 0x40018cc8a0}, {0x369c218, 0x40000a6240}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x4001bd2118?, {0x369c400, 0x40018cc8a0})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x4001bd2118, {0x369c400, 0x40018cc8a0})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369c400, 0x40018cc8a0}, {0x369c298, 0x4001bd2118}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x40013628c0?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 4993
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4

goroutine 1449 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x400068e350, 0x24)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x400068e340)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702540)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40006fb260)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40002679d0?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e62d0?, 0x40000823f0?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e62d0, 0x40000823f0}, 0x40012edf38, {0x369de40, 0x4001a97d10}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f3bf0?, {0x369de40?, 0x4001a97d10?}, 0xc0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40017975d0, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 1471
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 1471 [chan receive, 81 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40006fb260, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 1469
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 3701 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff040, {{0x36f3bf0, 0x40001bc080?}, 0x400025a600?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3697
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 4116 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff040, {{0x36f3bf0, 0x40001bc080?}, 0x400171cc40?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4112
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 704 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0x4004f54210, 0x2b)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4004f54200)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702540)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x400138e9c0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x400023e150?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e62d0?, 0x40000823f0?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e62d0, 0x40000823f0}, 0x40012eaf38, {0x369de40, 0x40024e3e90}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f3bf0?, {0x369de40?, 0x40024e3e90?}, 0x10?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40017f3400, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 780
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 785 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e62d0, 0x40000823f0}, 0x4001306740, 0x40013f1f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e62d0, 0x40000823f0}, 0x38?, 0x4001306740, 0x4001306788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e62d0?, 0x40000823f0?}, 0x40014bc480?, 0x40024a5a40?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x40024856c0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 780
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 1450 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e62d0, 0x40000823f0}, 0x40013ca740, 0x40013eff88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e62d0, 0x40000823f0}, 0x18?, 0x40013ca740, 0x40013ca788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e62d0?, 0x40000823f0?}, 0x4001875800?, 0x4000239e00?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x400025b380?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 1471
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 1837 [chan send, 79 minutes]:
os/exec.(*Cmd).watchCtx(0x400068b500, 0x40016948c0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1836
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 786 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 785
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 3692 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x40024bed50, 0x14)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40024bed40)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702540)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4004f47aa0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4004f22bd0?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e62d0?, 0x40000823f0?}, 0x4001b4e6a8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e62d0, 0x40000823f0}, 0x40012eff38, {0x369de40, 0x4001674f60}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4001b4e7a8?, {0x369de40?, 0x4001674f60?}, 0x60?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x400180a9c0, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3702
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 779 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff040, {{0x36f3bf0, 0x40001bc080?}, 0x40014b56c0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 778
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 780 [chan receive, 111 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x400138e9c0, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 778
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 3694 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3693
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 4120 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x40017919d0, 0x13)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40017919c0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702540)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001687ce0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x400170e930?, 0x6fc23ac00?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e62d0?, 0x40000823f0?}, 0x22ee620?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e62d0, 0x40000823f0}, 0x4001664f38, {0x369de40, 0x4001674c00}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40013cdfa8?, {0x369de40?, 0x4001674c00?}, 0xc8?, 0x40002b2850?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001534d80, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4117
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 3335 [chan receive, 14 minutes]:
testing.(*testState).waitParallel(0x400068d900)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1906 +0x4c4
testing.tRunner(0x400157aa80, 0x40018929a8)
	/usr/local/go/src/testing/testing.go:1940 +0x104
created by testing.(*T).Run in goroutine 3171
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 1451 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 1450
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 3657 [chan receive, 14 minutes]:
testing.(*T).Run(0x400157bc00, {0x296ed51?, 0x0?}, 0x4001442080)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1(0x400157bc00)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:128 +0x7e4
testing.tRunner(0x400157bc00, 0x4001790100)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3653
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 3653 [chan receive, 12 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x400157a700, 0x339bcf8)
	/usr/local/go/src/testing/testing.go:1940 +0x104
created by testing.(*T).Run in goroutine 3237
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4995 [IO wait]:
internal/poll.runtime_pollWait(0xffff5beaa200, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x400200c7e0?, 0x4001ad56bb?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x400200c7e0, {0x4001ad56bb, 0x2945, 0x2945})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x4001bd2130, {0x4001ad56bb?, 0x4001b4a568?, 0x8b27c?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x40018cc8d0, {0x369c218, 0x40000a6268})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369c400, 0x40018cc8d0}, {0x369c218, 0x40000a6268}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x4001bd2130?, {0x369c400, 0x40018cc8d0})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x4001bd2130, {0x369c400, 0x40018cc8d0})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369c400, 0x40018cc8d0}, {0x369c298, 0x4001bd2130}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x400025a480?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 4993
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4

goroutine 4306 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff040, {{0x36f3bf0, 0x40001bc080?}, 0x400171c000?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4305
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 4101 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e62d0, 0x40000823f0}, 0x40013cd740, 0x400166bf88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e62d0, 0x40000823f0}, 0x40?, 0x40013cd740, 0x40013cd788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e62d0?, 0x40000823f0?}, 0x36e5f38?, 0x40014c2150?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4001c2e480?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4097
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 4528 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0x4004f54750, 0x12)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4004f54740)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702540)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40022bbce0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x400022fb20?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e62d0?, 0x40000823f0?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e62d0, 0x40000823f0}, 0x40016b6f38, {0x369de40, 0x40014f1650}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f3bf0?, {0x369de40?, 0x40014f1650?}, 0x40?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40015db560, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4525
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 1822 [chan send, 79 minutes]:
os/exec.(*Cmd).watchCtx(0x400025ac00, 0x4001447340)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1821
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 4924 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0xffff5c33ba00, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x400200cc00?, 0x4001628368?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x400200cc00, {0x4001628368, 0x7c98, 0x7c98})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x40000a68b8, {0x4001628368?, 0x4001655d68?, 0x8b27c?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x40012ab3b0, {0x369c218, 0x4001bd20c8})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369c400, 0x40012ab3b0}, {0x369c218, 0x4001bd20c8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x40000a68b8?, {0x369c400, 0x40012ab3b0})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x40000a68b8, {0x369c400, 0x40012ab3b0})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369c400, 0x40012ab3b0}, {0x369c298, 0x40000a68b8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x4001c86480?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 4922
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4

goroutine 4993 [syscall, 4 minutes]:
syscall.Syscall6(0x5f, 0x3, 0x13, 0x40013acb18, 0x4, 0x400164aab0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:96 +0x2c
internal/syscall/unix.Waitid(0x40013acc78?, 0x1929a0?, 0xffffc8ff419f?, 0x0?, 0x40001038c0?)
	/usr/local/go/src/internal/syscall/unix/waitid_linux.go:18 +0x44
os.(*Process).pidfdWait.func1(...)
	/usr/local/go/src/os/pidfd_linux.go:109
os.ignoringEINTR(...)
	/usr/local/go/src/os/file_posix.go:256
os.(*Process).pidfdWait(0x40004fe400)
	/usr/local/go/src/os/pidfd_linux.go:108 +0x144
os.(*Process).wait(0x40013acc48?)
	/usr/local/go/src/os/exec_unix.go:25 +0x24
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:340
os/exec.(*Cmd).Wait(0x4001f68480)
	/usr/local/go/src/os/exec/exec.go:922 +0x38
os/exec.(*Cmd).Run(0x4001f68480)
	/usr/local/go/src/os/exec/exec.go:626 +0x38
k8s.io/minikube/test/integration.Run(0x40013628c0, 0x4001f68480)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x154
k8s.io/minikube/test/integration.validateSecondStart({0x36e5f38, 0x40002f0af0}, 0x40013628c0, {0x4001748270, 0x11}, {0xf2f75f7?, 0xf2f75f700161e84?}, {0x6937bd93?, 0x40013acf58?}, {0x40015de100?, ...})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:254 +0x90
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0x40013628c0?)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:154 +0x44
testing.tRunner(0x40013628c0, 0x4001940200)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 4494
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 3873 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff040, {{0x36f3bf0, 0x40001bc080?}, 0x40014b5c00?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3872
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 4345 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x4001818210, 0x13)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001818200)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702540)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40007657a0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4004f22620?, 0x21dd4?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e62d0?, 0x40000823f0?}, 0x40016556a8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e62d0, 0x40000823f0}, 0x40013b3f38, {0x369de40, 0x40016cb0e0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369de40?, 0x40016cb0e0?}, 0x1?, 0x36e5f38?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001535b30, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4342
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 4346 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e62d0, 0x40000823f0}, 0x40013d1f40, 0x40016baf88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e62d0, 0x40000823f0}, 0x40?, 0x40013d1f40, 0x40013d1f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e62d0?, 0x40000823f0?}, 0x40013d1fa8?, 0x400024dcc0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x40015e1680?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4342
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 4546 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4545
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 4793 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x4001791150, 0x10)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001791140)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702540)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40016874a0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4004f22c40?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e62d0?, 0x40000823f0?}, 0x4001306ea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e62d0, 0x40000823f0}, 0x4001667f38, {0x369de40, 0x400151cb40}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4001306fa8?, {0x369de40?, 0x400151cb40?}, 0x70?, 0x4001c86000?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40018392a0, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4790
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 4545 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e62d0, 0x40000823f0}, 0x40016b7f40, 0x40016b7f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e62d0, 0x40000823f0}, 0x20?, 0x40016b7f40, 0x40016b7f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e62d0?, 0x40000823f0?}, 0x4001307fa8?, 0x4004f09680?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4001942900?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4525
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 4097 [chan receive, 18 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40023257a0, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4060
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 3874 [chan receive, 20 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40015ecd80, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3872
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 4064 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff040, {{0x36f3bf0, 0x40001bc080?}, 0x400157ac40?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4060
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 3878 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e62d0, 0x40000823f0}, 0x40000a5f40, 0x40000a5f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e62d0, 0x40000823f0}, 0x95?, 0x40000a5f40, 0x40000a5f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e62d0?, 0x40000823f0?}, 0x0?, 0x40000a5f50?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f3bf0?, 0x40001bc080?, 0x40014b5c00?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3874
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 3879 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3878
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 4923 [IO wait, 6 minutes]:
internal/poll.runtime_pollWait(0xffff5bf71c00, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x400200cae0?, 0x400145c2b7?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x400200cae0, {0x400145c2b7, 0x549, 0x549})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x40000a6880, {0x400145c2b7?, 0x40000a5568?, 0x8b27c?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x40012ab320, {0x369c218, 0x4001bd20c0})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369c400, 0x40012ab320}, {0x369c218, 0x4001bd20c0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x40000a6880?, {0x369c400, 0x40012ab320})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x40000a6880, {0x369c400, 0x40012ab320})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369c400, 0x40012ab320}, {0x369c298, 0x40000a6880}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x4001f68300?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 4922
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4
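
Goroutines 4923 and 4925 above are the standard os/exec shape: Cmd.Start spawns one copier goroutine per captured output stream (the io.Copy frames) plus a watchCtx goroutine, and a child process that never exits leaves both parked for minutes, as shown. A minimal, hypothetical Go sketch of that shape (not minikube code; "sleep 60" stands in for the hung command, and the 2s context deadline for a test timeout):

	// Hypothetical sketch (not minikube code): the os/exec pattern behind
	// goroutines 4923/4925 above. Start launches a goroutine that io.Copies
	// the child's output into the supplied writer, plus a watchCtx goroutine.
	package main

	import (
		"bytes"
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		defer cancel()

		var out bytes.Buffer
		cmd := exec.CommandContext(ctx, "sleep", "60")
		cmd.Stdout = &out // Start launches the copier goroutine seen in the stack
		cmd.Stderr = &out

		err := cmd.Run() // blocks until watchCtx kills the child at the deadline
		fmt.Println("err:", err)
	}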

goroutine 4794 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e62d0, 0x40000823f0}, 0x4001b4ff40, 0x4001b4ff88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e62d0, 0x40000823f0}, 0x28?, 0x4001b4ff40, 0x4001b4ff88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e62d0?, 0x40000823f0?}, 0x4001635b00?, 0x40004a6a00?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4001566600?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4790
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 4925 [select, 6 minutes]:
os/exec.(*Cmd).watchCtx(0x4001f68600, 0x40018417a0)
	/usr/local/go/src/os/exec/exec.go:789 +0x70
created by os/exec.(*Cmd).Start in goroutine 4922
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 4790 [chan receive, 9 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40016874a0, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4804
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 4789 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff040, {{0x36f3bf0, 0x40001bc080?}, 0x40022ce390?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4804
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204
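
Most of the leaked goroutines in this dump share one shape: client-go's dynamicClientCert starts a worker via wait.Until, the worker parks in a typed workqueue Get (the sync.Cond.Wait frames) until the queue is shut down, and sibling goroutines sit in PollImmediateUntil selects. A minimal, hypothetical sketch of that pattern follows (illustrative only, not the actual cert_rotation.go source):

	// Minimal sketch of the wait.Until + workqueue worker loop visible in the
	// stacks above (illustrative; not the actual cert_rotation.go source).
	package main

	import (
		"fmt"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/util/workqueue"
	)

	func main() {
		queue := workqueue.NewTyped[string]() // the Typed[...] queue in the Get frames
		stopCh := make(chan struct{})

		worker := func() {
			for {
				key, shutdown := queue.Get() // parks in sync.Cond.Wait while empty
				if shutdown {
					return
				}
				fmt.Println("processing", key)
				queue.Done(key)
			}
		}

		// wait.Until re-runs the worker every second if it returns, which is why
		// each runWorker goroutine sits under BackoffUntil/JitterUntil frames.
		go wait.Until(worker, time.Second, stopCh)

		queue.Add("rotate-client-cert")
		time.Sleep(100 * time.Millisecond)
		queue.ShutDown() // unblocks Get; without this the worker goroutine leaks
		close(stopCh)
	}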


Test pass (239/316)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 34.89
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.2/json-events 28.44
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.1
18 TestDownloadOnly/v1.34.2/DeleteAll 0.22
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.35.0-beta.0/json-events 16.13
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.09
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.21
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.62
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
36 TestAddons/Setup 140.49
40 TestAddons/serial/GCPAuth/Namespaces 0.21
41 TestAddons/serial/GCPAuth/FakeCredentials 8.91
57 TestAddons/StoppedEnableDisable 12.43
58 TestCertOptions 37.03
59 TestCertExpiration 254.32
61 TestForceSystemdFlag 46.93
62 TestForceSystemdEnv 35.02
67 TestErrorSpam/setup 31.28
68 TestErrorSpam/start 0.86
69 TestErrorSpam/status 1.09
70 TestErrorSpam/pause 7.32
71 TestErrorSpam/unpause 6.11
72 TestErrorSpam/stop 1.53
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 79.34
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 26.67
79 TestFunctional/serial/KubeContext 0.06
80 TestFunctional/serial/KubectlGetPods 0.09
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.62
84 TestFunctional/serial/CacheCmd/cache/add_local 1.24
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.05
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.82
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.14
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
92 TestFunctional/serial/ExtraConfig 37.64
93 TestFunctional/serial/ComponentHealth 0.11
94 TestFunctional/serial/LogsCmd 1.52
95 TestFunctional/serial/LogsFileCmd 1.52
96 TestFunctional/serial/InvalidService 4
98 TestFunctional/parallel/ConfigCmd 0.54
99 TestFunctional/parallel/DashboardCmd 8.83
100 TestFunctional/parallel/DryRun 0.96
101 TestFunctional/parallel/InternationalLanguage 0.3
102 TestFunctional/parallel/StatusCmd 1.3
106 TestFunctional/parallel/ServiceCmdConnect 8.58
107 TestFunctional/parallel/AddonsCmd 0.15
108 TestFunctional/parallel/PersistentVolumeClaim 20.97
110 TestFunctional/parallel/SSHCmd 0.71
111 TestFunctional/parallel/CpCmd 2.42
113 TestFunctional/parallel/FileSync 0.37
114 TestFunctional/parallel/CertSync 2.35
118 TestFunctional/parallel/NodeLabels 0.15
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.71
122 TestFunctional/parallel/License 0.21
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.67
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.46
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
134 TestFunctional/parallel/ServiceCmd/DeployApp 8.21
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.47
136 TestFunctional/parallel/ProfileCmd/profile_list 0.45
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
138 TestFunctional/parallel/MountCmd/any-port 7.05
139 TestFunctional/parallel/ServiceCmd/List 0.52
140 TestFunctional/parallel/ServiceCmd/JSONOutput 0.55
141 TestFunctional/parallel/ServiceCmd/HTTPS 0.52
142 TestFunctional/parallel/ServiceCmd/Format 0.48
143 TestFunctional/parallel/ServiceCmd/URL 0.57
144 TestFunctional/parallel/MountCmd/specific-port 2.11
145 TestFunctional/parallel/MountCmd/VerifyCleanup 2.65
146 TestFunctional/parallel/Version/short 0.09
147 TestFunctional/parallel/Version/components 0.74
148 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
149 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
150 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
151 TestFunctional/parallel/ImageCommands/ImageListYaml 0.33
152 TestFunctional/parallel/ImageCommands/ImageBuild 4.08
153 TestFunctional/parallel/ImageCommands/Setup 0.62
154 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.26
155 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.51
156 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
157 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
158 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
159 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.38
160 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.49
161 TestFunctional/parallel/ImageCommands/ImageRemove 0.65
162 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.81
163 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.5
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.82
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.11
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.05
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.31
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.95
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.12
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 0.93
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 0.98
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.51
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.44
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.2
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.15
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.74
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 2.25
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.28
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.7
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.58
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.27
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.1
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.39
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.39
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.4
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 2.03
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.78
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.07
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.49
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.24
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.25
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.24
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.26
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.93
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.32
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.23
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.83
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.08
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.38
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.53
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.77
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.43
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.15
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.16
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.14
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
264 TestMultiControlPlane/serial/StartCluster 220.01
265 TestMultiControlPlane/serial/DeployApp 8.18
266 TestMultiControlPlane/serial/PingHostFromPods 1.48
267 TestMultiControlPlane/serial/AddWorkerNode 59.94
268 TestMultiControlPlane/serial/NodeLabels 0.11
269 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.08
270 TestMultiControlPlane/serial/CopyFile 20.35
271 TestMultiControlPlane/serial/StopSecondaryNode 12.91
272 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.82
274 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.03
275 TestMultiControlPlane/serial/RestartClusterKeepsNodes 160.02
276 TestMultiControlPlane/serial/DeleteSecondaryNode 12.24
277 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.85
278 TestMultiControlPlane/serial/StopCluster 36.18
279 TestMultiControlPlane/serial/RestartCluster 89.35
280 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.81
281 TestMultiControlPlane/serial/AddSecondaryNode 94.2
282 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.13
287 TestJSONOutput/start/Command 52.38
288 TestJSONOutput/start/Audit 0
290 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
291 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
294 TestJSONOutput/pause/Audit 0
296 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
300 TestJSONOutput/unpause/Audit 0
302 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/stop/Command 5.85
306 TestJSONOutput/stop/Audit 0
308 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
310 TestErrorJSONOutput 0.26
312 TestKicCustomNetwork/create_custom_network 40.26
313 TestKicCustomNetwork/use_default_bridge_network 36.98
314 TestKicExistingNetwork 35.12
315 TestKicCustomSubnet 36.92
316 TestKicStaticIP 35.73
317 TestMainNoArgs 0.06
318 TestMinikubeProfile 77.24
321 TestMountStart/serial/StartWithMountFirst 8.72
322 TestMountStart/serial/VerifyMountFirst 0.28
323 TestMountStart/serial/StartWithMountSecond 9.08
324 TestMountStart/serial/VerifyMountSecond 0.28
325 TestMountStart/serial/DeleteFirst 1.73
326 TestMountStart/serial/VerifyMountPostDelete 0.29
327 TestMountStart/serial/Stop 1.3
328 TestMountStart/serial/RestartStopped 7.86
329 TestMountStart/serial/VerifyMountPostStop 0.3
332 TestMultiNode/serial/FreshStart2Nodes 139.43
333 TestMultiNode/serial/DeployApp2Nodes 5.98
334 TestMultiNode/serial/PingHostFrom2Pods 0.93
335 TestMultiNode/serial/AddNode 59.32
336 TestMultiNode/serial/MultiNodeLabels 0.09
337 TestMultiNode/serial/ProfileList 0.72
338 TestMultiNode/serial/CopyFile 10.81
339 TestMultiNode/serial/StopNode 2.46
340 TestMultiNode/serial/StartAfterStop 9.02
341 TestMultiNode/serial/RestartKeepsNodes 76.05
342 TestMultiNode/serial/DeleteNode 5.75
343 TestMultiNode/serial/StopMultiNode 24.02
344 TestMultiNode/serial/RestartMultiNode 57.27
345 TestMultiNode/serial/ValidateNameConflict 36.27
350 TestPreload 119.51
352 TestScheduledStopUnix 109.88
355 TestInsufficientStorage 12.91
356 TestRunningBinaryUpgrade 302.45
359 TestMissingContainerUpgrade 110.4
361 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
362 TestNoKubernetes/serial/StartWithK8s 42.75
363 TestNoKubernetes/serial/StartWithStopK8s 18
364 TestNoKubernetes/serial/Start 8.81
365 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
366 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
367 TestNoKubernetes/serial/ProfileList 0.71
368 TestNoKubernetes/serial/Stop 1.29
369 TestNoKubernetes/serial/StartNoArgs 93.38
381 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
382 TestStoppedBinaryUpgrade/Setup 1.9
383 TestStoppedBinaryUpgrade/Upgrade 303.94
384 TestStoppedBinaryUpgrade/MinikubeLogs 1.66
393 TestPause/serial/Start 95.01
397 TestPause/serial/SecondStartNoReconfiguration 28.64
x
+
TestDownloadOnly/v1.28.0/json-events (34.89s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-711071 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-711071 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (34.893255993s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (34.89s)

x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1209 04:16:23.046686 1580521 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1209 04:16:23.046771 1580521 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
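
The check above amounts to a stat of the cached tarball. A hypothetical Go helper showing the equivalent lookup (the path layout is copied from the log; the helper itself is illustrative, not minikube's preload.go):

	// Hypothetical sketch of the "preload exists" check exercised above: stat
	// the cached tarball under the minikube home printed in the log.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func preloadExists(minikubeHome, k8sVersion string) bool {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-arm64.tar.lz4", k8sVersion)
		_, err := os.Stat(filepath.Join(minikubeHome, "cache", "preloaded-tarball", name))
		return err == nil
	}

	func main() {
		home := "/home/jenkins/minikube-integration/22081-1577059/.minikube" // from the log
		fmt.Println(preloadExists(home, "v1.28.0"))
	}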

x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-711071
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-711071: exit status 85 (89.293639ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-711071 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-711071 │ jenkins │ v1.37.0 │ 09 Dec 25 04:15 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 04:15:48
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 04:15:48.193752 1580526 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:15:48.193891 1580526 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:15:48.193903 1580526 out.go:374] Setting ErrFile to fd 2...
	I1209 04:15:48.193908 1580526 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:15:48.194183 1580526 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	W1209 04:15:48.194322 1580526 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22081-1577059/.minikube/config/config.json: open /home/jenkins/minikube-integration/22081-1577059/.minikube/config/config.json: no such file or directory
	I1209 04:15:48.194795 1580526 out.go:368] Setting JSON to true
	I1209 04:15:48.195639 1580526 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":32289,"bootTime":1765221460,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1209 04:15:48.195708 1580526 start.go:143] virtualization:  
	I1209 04:15:48.200966 1580526 out.go:99] [download-only-711071] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1209 04:15:48.201174 1580526 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball: no such file or directory
	I1209 04:15:48.201299 1580526 notify.go:221] Checking for updates...
	I1209 04:15:48.205918 1580526 out.go:171] MINIKUBE_LOCATION=22081
	I1209 04:15:48.209503 1580526 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:15:48.212878 1580526 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:15:48.216136 1580526 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1577059/.minikube
	I1209 04:15:48.219217 1580526 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1209 04:15:48.225301 1580526 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1209 04:15:48.225571 1580526 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:15:48.249485 1580526 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:15:48.249623 1580526 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:15:48.311401 1580526 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-09 04:15:48.302046498 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:15:48.311515 1580526 docker.go:319] overlay module found
	I1209 04:15:48.314737 1580526 out.go:99] Using the docker driver based on user configuration
	I1209 04:15:48.314788 1580526 start.go:309] selected driver: docker
	I1209 04:15:48.314801 1580526 start.go:927] validating driver "docker" against <nil>
	I1209 04:15:48.314921 1580526 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:15:48.374503 1580526 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-09 04:15:48.365465459 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:15:48.374678 1580526 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1209 04:15:48.374970 1580526 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1209 04:15:48.375116 1580526 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 04:15:48.378413 1580526 out.go:171] Using Docker driver with root privileges
	I1209 04:15:48.381505 1580526 cni.go:84] Creating CNI manager for ""
	I1209 04:15:48.381598 1580526 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 04:15:48.381612 1580526 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 04:15:48.381707 1580526 start.go:353] cluster config:
	{Name:download-only-711071 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-711071 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:15:48.384773 1580526 out.go:99] Starting "download-only-711071" primary control-plane node in "download-only-711071" cluster
	I1209 04:15:48.384802 1580526 cache.go:134] Beginning downloading kic base image for docker with crio
	I1209 04:15:48.387830 1580526 out.go:99] Pulling base image v0.0.48-1765184860-22066 ...
	I1209 04:15:48.387898 1580526 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1209 04:15:48.387961 1580526 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 04:15:48.407865 1580526 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 04:15:48.407885 1580526 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c to local cache
	I1209 04:15:48.408033 1580526 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local cache directory
	I1209 04:15:48.408134 1580526 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c to local cache
	I1209 04:15:48.439049 1580526 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1209 04:15:48.439080 1580526 cache.go:65] Caching tarball of preloaded images
	I1209 04:15:48.439253 1580526 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1209 04:15:48.442809 1580526 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1209 04:15:48.442844 1580526 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1209 04:15:48.530751 1580526 preload.go:295] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1209 04:15:48.530956 1580526 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1209 04:15:53.866442 1580526 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c as a tarball
	
	
	* The control-plane node download-only-711071 host does not exist
	  To start a cluster, run: "minikube start -p download-only-711071"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
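
The Last Start log above shows the preload download being verified against an MD5 checksum fetched from the GCS API (the "?checksum=md5:..." suffix on the download URL). A hedged sketch of that verify-while-downloading step (illustrative; not minikube's download.go — the URL and checksum values below are copied from the log, the destination path is made up):

	// Illustrative sketch (not minikube's download.go): stream a preload
	// tarball to disk while hashing it, then compare against the MD5 the
	// GCS API returned.
	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	func downloadWithMD5(url, dest, wantMD5 string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()

		out, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer out.Close()

		h := md5.New()
		// Tee the response into both the file and the hash in one pass.
		if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
		}
		return nil
	}

	func main() {
		// URL and checksum copied from the log above; dest is hypothetical.
		url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4"
		if err := downloadWithMD5(url, "/tmp/preload.tar.lz4", "e092595ade89dbfc477bd4cd6b9c633b"); err != nil {
			fmt.Println("download failed:", err)
		}
	}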

x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-711071
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

x
+
TestDownloadOnly/v1.34.2/json-events (28.44s)

=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-640851 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-640851 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (28.441019737s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (28.44s)

x
+
TestDownloadOnly/v1.34.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1209 04:16:51.931073 1580521 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1209 04:16:51.931109 1580521 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

x
+
TestDownloadOnly/v1.34.2/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-640851
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-640851: exit status 85 (102.13942ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-711071 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-711071 │ jenkins │ v1.37.0 │ 09 Dec 25 04:15 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 09 Dec 25 04:16 UTC │ 09 Dec 25 04:16 UTC │
	│ delete  │ -p download-only-711071                                                                                                                                                   │ download-only-711071 │ jenkins │ v1.37.0 │ 09 Dec 25 04:16 UTC │ 09 Dec 25 04:16 UTC │
	│ start   │ -o=json --download-only -p download-only-640851 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-640851 │ jenkins │ v1.37.0 │ 09 Dec 25 04:16 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 04:16:23
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 04:16:23.531100 1580716 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:16:23.531287 1580716 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:16:23.531318 1580716 out.go:374] Setting ErrFile to fd 2...
	I1209 04:16:23.531339 1580716 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:16:23.531635 1580716 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 04:16:23.532060 1580716 out.go:368] Setting JSON to true
	I1209 04:16:23.532966 1580716 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":32324,"bootTime":1765221460,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1209 04:16:23.533066 1580716 start.go:143] virtualization:  
	I1209 04:16:23.536528 1580716 out.go:99] [download-only-640851] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 04:16:23.536797 1580716 notify.go:221] Checking for updates...
	I1209 04:16:23.540569 1580716 out.go:171] MINIKUBE_LOCATION=22081
	I1209 04:16:23.544056 1580716 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:16:23.546977 1580716 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:16:23.549867 1580716 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1577059/.minikube
	I1209 04:16:23.552724 1580716 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1209 04:16:23.558429 1580716 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1209 04:16:23.558716 1580716 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:16:23.584024 1580716 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:16:23.584158 1580716 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:16:23.645598 1580716 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-09 04:16:23.636658691 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:16:23.645732 1580716 docker.go:319] overlay module found
	I1209 04:16:23.648732 1580716 out.go:99] Using the docker driver based on user configuration
	I1209 04:16:23.648780 1580716 start.go:309] selected driver: docker
	I1209 04:16:23.648791 1580716 start.go:927] validating driver "docker" against <nil>
	I1209 04:16:23.648901 1580716 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:16:23.712424 1580716 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-09 04:16:23.703403699 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:16:23.712582 1580716 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1209 04:16:23.712881 1580716 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1209 04:16:23.713029 1580716 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 04:16:23.716040 1580716 out.go:171] Using Docker driver with root privileges
	I1209 04:16:23.719013 1580716 cni.go:84] Creating CNI manager for ""
	I1209 04:16:23.719087 1580716 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 04:16:23.719109 1580716 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 04:16:23.719188 1580716 start.go:353] cluster config:
	{Name:download-only-640851 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-640851 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:16:23.722125 1580716 out.go:99] Starting "download-only-640851" primary control-plane node in "download-only-640851" cluster
	I1209 04:16:23.722148 1580716 cache.go:134] Beginning downloading kic base image for docker with crio
	I1209 04:16:23.724946 1580716 out.go:99] Pulling base image v0.0.48-1765184860-22066 ...
	I1209 04:16:23.724993 1580716 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 04:16:23.725158 1580716 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 04:16:23.744269 1580716 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 04:16:23.744300 1580716 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c to local cache
	I1209 04:16:23.744414 1580716 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local cache directory
	I1209 04:16:23.744436 1580716 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local cache directory, skipping pull
	I1209 04:16:23.744441 1580716 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in cache, skipping pull
	I1209 04:16:23.744449 1580716 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c as a tarball
	I1209 04:16:23.775423 1580716 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1209 04:16:23.775460 1580716 cache.go:65] Caching tarball of preloaded images
	I1209 04:16:23.775646 1580716 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1209 04:16:23.778857 1580716 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1209 04:16:23.778895 1580716 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1209 04:16:23.855924 1580716 preload.go:295] Got checksum from GCS API "36a1245638f6169d426638fac0bd307d"
	I1209 04:16:23.856014 1580716 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:36a1245638f6169d426638fac0bd307d -> /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-640851 host does not exist
	  To start a cluster, run: "minikube start -p download-only-640851"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.10s)
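The preload download logged above is an ordinary HTTPS fetch guarded by an md5 checksum from the GCS API. A minimal sketch of the same fetch-and-verify done by hand, reusing the URL and checksum from the log (the output filename is illustrative):

    # Download the preload tarball and verify it against the GCS checksum.
    curl -fLo preload-v1.34.2.tar.lz4 \
      "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4"
    echo "36a1245638f6169d426638fac0bd307d  preload-v1.34.2.tar.lz4" | md5sum -c -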

TestDownloadOnly/v1.34.2/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.22s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-640851
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.35.0-beta.0/json-events (16.13s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-306472 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-306472 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (16.127061305s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (16.13s)

TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1209 04:17:08.523721 1580521 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
I1209 04:17:08.523759 1580521 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-306472
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-306472: exit status 85 (89.231016ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                       ARGS                                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-711071 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-711071 │ jenkins │ v1.37.0 │ 09 Dec 25 04:15 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 09 Dec 25 04:16 UTC │ 09 Dec 25 04:16 UTC │
	│ delete  │ -p download-only-711071                                                                                                                                                          │ download-only-711071 │ jenkins │ v1.37.0 │ 09 Dec 25 04:16 UTC │ 09 Dec 25 04:16 UTC │
	│ start   │ -o=json --download-only -p download-only-640851 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-640851 │ jenkins │ v1.37.0 │ 09 Dec 25 04:16 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 09 Dec 25 04:16 UTC │ 09 Dec 25 04:16 UTC │
	│ delete  │ -p download-only-640851                                                                                                                                                          │ download-only-640851 │ jenkins │ v1.37.0 │ 09 Dec 25 04:16 UTC │ 09 Dec 25 04:16 UTC │
	│ start   │ -o=json --download-only -p download-only-306472 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-306472 │ jenkins │ v1.37.0 │ 09 Dec 25 04:16 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/09 04:16:52
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 04:16:52.443087 1580913 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:16:52.443538 1580913 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:16:52.443601 1580913 out.go:374] Setting ErrFile to fd 2...
	I1209 04:16:52.443624 1580913 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:16:52.443920 1580913 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 04:16:52.444377 1580913 out.go:368] Setting JSON to true
	I1209 04:16:52.445252 1580913 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":32353,"bootTime":1765221460,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1209 04:16:52.445363 1580913 start.go:143] virtualization:  
	I1209 04:16:52.448896 1580913 out.go:99] [download-only-306472] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 04:16:52.449147 1580913 notify.go:221] Checking for updates...
	I1209 04:16:52.452077 1580913 out.go:171] MINIKUBE_LOCATION=22081
	I1209 04:16:52.455180 1580913 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:16:52.458151 1580913 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:16:52.460948 1580913 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1577059/.minikube
	I1209 04:16:52.463993 1580913 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1209 04:16:52.469790 1580913 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1209 04:16:52.470086 1580913 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:16:52.496163 1580913 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:16:52.496304 1580913 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:16:52.554736 1580913 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-09 04:16:52.545278833 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:16:52.554845 1580913 docker.go:319] overlay module found
	I1209 04:16:52.557798 1580913 out.go:99] Using the docker driver based on user configuration
	I1209 04:16:52.557832 1580913 start.go:309] selected driver: docker
	I1209 04:16:52.557838 1580913 start.go:927] validating driver "docker" against <nil>
	I1209 04:16:52.557934 1580913 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:16:52.611844 1580913 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-09 04:16:52.602172978 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:16:52.611998 1580913 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1209 04:16:52.612265 1580913 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1209 04:16:52.612438 1580913 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 04:16:52.615615 1580913 out.go:171] Using Docker driver with root privileges
	I1209 04:16:52.618414 1580913 cni.go:84] Creating CNI manager for ""
	I1209 04:16:52.618487 1580913 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 04:16:52.618501 1580913 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 04:16:52.618734 1580913 start.go:353] cluster config:
	{Name:download-only-306472 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:download-only-306472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:16:52.621810 1580913 out.go:99] Starting "download-only-306472" primary control-plane node in "download-only-306472" cluster
	I1209 04:16:52.621827 1580913 cache.go:134] Beginning downloading kic base image for docker with crio
	I1209 04:16:52.624452 1580913 out.go:99] Pulling base image v0.0.48-1765184860-22066 ...
	I1209 04:16:52.624490 1580913 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1209 04:16:52.624609 1580913 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
	I1209 04:16:52.644481 1580913 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
	I1209 04:16:52.644506 1580913 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c to local cache
	I1209 04:16:52.644583 1580913 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local cache directory
	I1209 04:16:52.644606 1580913 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local cache directory, skipping pull
	I1209 04:16:52.644610 1580913 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in cache, skipping pull
	I1209 04:16:52.644618 1580913 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c as a tarball
	I1209 04:16:52.679171 1580913 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1209 04:16:52.679199 1580913 cache.go:65] Caching tarball of preloaded images
	I1209 04:16:52.679365 1580913 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1209 04:16:52.682621 1580913 out.go:99] Downloading Kubernetes v1.35.0-beta.0 preload ...
	I1209 04:16:52.682662 1580913 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1209 04:16:52.768848 1580913 preload.go:295] Got checksum from GCS API "e7da2fb676059c00535073e4a61150f1"
	I1209 04:16:52.768902 1580913 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e7da2fb676059c00535073e4a61150f1 -> /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-306472 host does not exist
	  To start a cluster, run: "minikube start -p download-only-306472"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.09s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.21s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-306472
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.62s)

=== RUN   TestBinaryMirror
I1209 04:17:09.831325 1580521 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-878510 --alsologtostderr --binary-mirror http://127.0.0.1:38315 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-878510" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-878510
--- PASS: TestBinaryMirror (0.62s)
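The `checksum=file:` URL logged above means minikube verifies the kubectl download against the .sha256 digest that dl.k8s.io publishes next to each release binary. A minimal sketch of the same check done by hand:

    # Fetch the binary and its published digest, then compare.
    curl -fLO "https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl"
    echo "$(curl -fsSL https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl.sha256)  kubectl" | sha256sum -c -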

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1060: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-377526
addons_test.go:1060: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-377526: exit status 85 (76.091488ms)

-- stdout --
	* Profile "addons-377526" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-377526"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1071: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-377526
addons_test.go:1071: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-377526: exit status 85 (82.666627ms)

-- stdout --
	* Profile "addons-377526" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-377526"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (140.49s)

=== RUN   TestAddons/Setup
addons_test.go:113: (dbg) Run:  out/minikube-linux-arm64 start -p addons-377526 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:113: (dbg) Done: out/minikube-linux-arm64 start -p addons-377526 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m20.485753536s)
--- PASS: TestAddons/Setup (140.49s)

TestAddons/serial/GCPAuth/Namespaces (0.21s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:690: (dbg) Run:  kubectl --context addons-377526 create ns new-namespace
addons_test.go:704: (dbg) Run:  kubectl --context addons-377526 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.21s)
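The assertion above rests on the gcp-auth addon replicating its credentials Secret into each newly created namespace. A quick manual probe of that behavior, reusing the namespace from the test:

    # Confirm the replicated Secret exists in the new namespace.
    kubectl --context addons-377526 get secret gcp-auth -n new-namespace -o jsonpath='{.metadata.name}'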

TestAddons/serial/GCPAuth/FakeCredentials (8.91s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:735: (dbg) Run:  kubectl --context addons-377526 create -f testdata/busybox.yaml
addons_test.go:742: (dbg) Run:  kubectl --context addons-377526 create sa gcp-auth-test
addons_test.go:748: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [57268e01-0d57-4108-a966-2bf34593e140] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [57268e01-0d57-4108-a966-2bf34593e140] Running
addons_test.go:748: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.01151042s
addons_test.go:754: (dbg) Run:  kubectl --context addons-377526 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:766: (dbg) Run:  kubectl --context addons-377526 describe sa gcp-auth-test
addons_test.go:780: (dbg) Run:  kubectl --context addons-377526 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:804: (dbg) Run:  kubectl --context addons-377526 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.91s)
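What the exec steps above verify: any pod admitted while the gcp-auth webhook is active gets GOOGLE_APPLICATION_CREDENTIALS and GOOGLE_CLOUD_PROJECT injected, plus a mounted /google-app-creds.json. A minimal sketch of the same probe with a throwaway pod (the pod name is illustrative):

    # One-shot pod: prints the env vars the webhook injects, then cleans up.
    kubectl --context addons-377526 run gcp-auth-probe --image=busybox --restart=Never --rm -i \
      --command -- sh -c 'printenv GOOGLE_APPLICATION_CREDENTIALS GOOGLE_CLOUD_PROJECT'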

TestAddons/StoppedEnableDisable (12.43s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:177: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-377526
addons_test.go:177: (dbg) Done: out/minikube-linux-arm64 stop -p addons-377526: (12.137219094s)
addons_test.go:181: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-377526
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-377526
addons_test.go:190: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-377526
--- PASS: TestAddons/StoppedEnableDisable (12.43s)
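Note what this test establishes: `addons enable` and `addons disable` succeed against a stopped cluster, with the change taking effect on the next start. The persisted addon state can be read back without starting anything, e.g.:

    # List addon status straight from the profile's saved config.
    out/minikube-linux-arm64 addons list -p addons-377526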

TestCertOptions (37.03s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-037055 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-037055 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (34.173990616s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-037055 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-037055 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-037055 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-037055" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-037055
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-037055: (2.121703791s)
--- PASS: TestCertOptions (37.03s)
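The openssl step above is where the --apiserver-ips and --apiserver-names flags are actually verified: they must show up as SANs in the generated apiserver certificate. A minimal sketch of the same check while the profile exists (the grep pattern is illustrative):

    # Show the Subject Alternative Name block of the apiserver cert.
    out/minikube-linux-arm64 -p cert-options-037055 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 'Subject Alternative Name'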

TestCertExpiration (254.32s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-659753 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
E1209 05:39:22.262212 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:39:31.980548 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-659753 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (35.057806399s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-659753 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-659753 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (36.519306663s)
helpers_test.go:175: Cleaning up "cert-expiration-659753" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-659753
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-659753: (2.739132101s)
--- PASS: TestCertExpiration (254.32s)
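The second start with --cert-expiration=8760h regenerates the cluster certificates with a one-year lifetime. While the profile still exists, the new validity window can be read directly from the node; a sketch:

    # Print notBefore/notAfter for the refreshed apiserver certificate.
    out/minikube-linux-arm64 -p cert-expiration-659753 ssh \
      "sudo openssl x509 -noout -dates -in /var/lib/minikube/certs/apiserver.crt"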

TestForceSystemdFlag (46.93s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-530604 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-530604 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (43.474868053s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-530604 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-530604" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-530604
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-530604: (3.044577313s)
--- PASS: TestForceSystemdFlag (46.93s)
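The `cat /etc/crio/crio.conf.d/02-crio.conf` step is checking that --force-systemd switched CRI-O to the systemd cgroup manager. Roughly, the drop-in is expected to contain a stanza like the one sketched in the comments below (exact contents vary by kicbase version):

    # Inspect the drop-in; with --force-systemd it should carry:
    #   [crio.runtime]
    #   cgroup_manager = "systemd"
    out/minikube-linux-arm64 -p force-systemd-flag-530604 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"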

TestForceSystemdEnv (35.02s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-772419 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-772419 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (32.302907998s)
helpers_test.go:175: Cleaning up "force-systemd-env-772419" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-772419
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-772419: (2.713571407s)
--- PASS: TestForceSystemdEnv (35.02s)

TestErrorSpam/setup (31.28s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-763363 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-763363 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-763363 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-763363 --driver=docker  --container-runtime=crio: (31.281356273s)
--- PASS: TestErrorSpam/setup (31.28s)

TestErrorSpam/start (0.86s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-763363 --log_dir /tmp/nospam-763363 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-763363 --log_dir /tmp/nospam-763363 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-763363 --log_dir /tmp/nospam-763363 start --dry-run
--- PASS: TestErrorSpam/start (0.86s)

TestErrorSpam/status (1.09s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-763363 --log_dir /tmp/nospam-763363 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-763363 --log_dir /tmp/nospam-763363 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-763363 --log_dir /tmp/nospam-763363 status
--- PASS: TestErrorSpam/status (1.09s)

TestErrorSpam/pause (7.32s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-763363 --log_dir /tmp/nospam-763363 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-763363 --log_dir /tmp/nospam-763363 pause: exit status 80 (2.411112379s)

-- stdout --
	* Pausing node nospam-763363 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T04:23:24Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-763363 --log_dir /tmp/nospam-763363 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-763363 --log_dir /tmp/nospam-763363 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-763363 --log_dir /tmp/nospam-763363 pause: exit status 80 (2.524777823s)

-- stdout --
	* Pausing node nospam-763363 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T04:23:27Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-763363 --log_dir /tmp/nospam-763363 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-763363 --log_dir /tmp/nospam-763363 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-763363 --log_dir /tmp/nospam-763363 pause: exit status 80 (2.380385887s)

-- stdout --
	* Pausing node nospam-763363 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T04:23:29Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-763363 --log_dir /tmp/nospam-763363 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (7.32s)
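The exit-status-80 failures above all trace back to the same probe: minikube pause shells into the node and runs `sudo runc list -f json`, which fails because /run/runc does not exist. The probe can be replayed by hand to confirm which side is broken (this assumes runc's default state directory; CRI-O can be configured to point runc at a different --root):

    # Replay the exact command minikube pause runs inside the node.
    out/minikube-linux-arm64 -p nospam-763363 ssh "sudo runc list -f json"
    out/minikube-linux-arm64 -p nospam-763363 ssh "ls /run/runc || echo 'runc state dir missing'"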

TestErrorSpam/unpause (6.11s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-763363 --log_dir /tmp/nospam-763363 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-763363 --log_dir /tmp/nospam-763363 unpause: exit status 80 (2.149059561s)

-- stdout --
	* Unpausing node nospam-763363 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T04:23:31Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-763363 --log_dir /tmp/nospam-763363 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-763363 --log_dir /tmp/nospam-763363 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-763363 --log_dir /tmp/nospam-763363 unpause: exit status 80 (1.939676608s)

-- stdout --
	* Unpausing node nospam-763363 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T04:23:33Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-763363 --log_dir /tmp/nospam-763363 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-763363 --log_dir /tmp/nospam-763363 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-763363 --log_dir /tmp/nospam-763363 unpause: exit status 80 (2.021029781s)

-- stdout --
	* Unpausing node nospam-763363 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-09T04:23:35Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-763363 --log_dir /tmp/nospam-763363 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (6.11s)

TestErrorSpam/stop (1.53s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-763363 --log_dir /tmp/nospam-763363 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-763363 --log_dir /tmp/nospam-763363 stop: (1.321818622s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-763363 --log_dir /tmp/nospam-763363 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-763363 --log_dir /tmp/nospam-763363 stop
--- PASS: TestErrorSpam/stop (1.53s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/test/nested/copy/1580521/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (79.34s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-790468 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1209 04:24:31.984054 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:24:31.991561 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:24:32.003247 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:24:32.025182 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:24:32.066671 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:24:32.148194 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:24:32.310023 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:24:32.631766 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:24:33.273903 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:24:34.555837 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:24:37.118740 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:24:42.241114 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:24:52.483398 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-790468 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m19.334619148s)
--- PASS: TestFunctional/serial/StartWithProxy (79.34s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (26.67s)

=== RUN   TestFunctional/serial/SoftStart
I1209 04:25:00.778525 1580521 config.go:182] Loaded profile config "functional-790468": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-790468 --alsologtostderr -v=8
E1209 04:25:12.964672 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-790468 --alsologtostderr -v=8: (26.660779912s)
functional_test.go:678: soft start took 26.665589169s for "functional-790468" cluster.
I1209 04:25:27.439635 1580521 config.go:182] Loaded profile config "functional-790468": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (26.67s)
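For context: the SoftStart subtest simply runs `minikube start` a second time against a profile that is already up and asserts it reconnects rather than recreating anything. A by-hand sketch (assuming a stock `minikube` binary rather than the CI build under out/):

# first start creates the profile; the second one is the "soft" start
minikube start -p functional-790468 --driver=docker --container-runtime=crio
minikube start -p functional-790468 --alsologtostderr -v=8    # reuses the running container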

TestFunctional/serial/KubeContext (0.06s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-790468 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.62s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-790468 cache add registry.k8s.io/pause:3.1: (1.261769358s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-790468 cache add registry.k8s.io/pause:3.3: (1.232924072s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-790468 cache add registry.k8s.io/pause:latest: (1.123379944s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.62s)

TestFunctional/serial/CacheCmd/cache/add_local (1.24s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-790468 /tmp/TestFunctionalserialCacheCmdcacheadd_local1602782742/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 cache add minikube-local-cache-test:functional-790468
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 cache delete minikube-local-cache-test:functional-790468
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-790468
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.24s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.82s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-790468 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (285.645099ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.82s)
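The cache_reload exchange above is a remove-then-restore round trip; reproduced by hand it looks like this (assuming a stock `minikube` on PATH instead of the CI build):

minikube -p functional-790468 cache add registry.k8s.io/pause:latest                  # pull into the host-side cache and load it
minikube -p functional-790468 ssh sudo crictl rmi registry.k8s.io/pause:latest        # delete it from the node
minikube -p functional-790468 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image is gone
minikube -p functional-790468 cache reload                                            # push cached images back to the node
minikube -p functional-790468 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again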

TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 kubectl -- --context functional-790468 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-790468 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (37.64s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-790468 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1209 04:25:53.926968 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-790468 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.639291674s)
functional_test.go:776: restart took 37.639399235s for "functional-790468" cluster.
I1209 04:26:12.720764 1580521 config.go:182] Loaded profile config "functional-790468": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (37.64s)
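--extra-config takes component.key=value tuples and, as this test shows, can be applied to an already-running cluster by re-running start; a sketch of the invocation exercised above:

# restart the existing profile with an extra apiserver admission plugin enabled
minikube start -p functional-790468 \
  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
  --wait=all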

TestFunctional/serial/ComponentHealth (0.11s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-790468 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)
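The phase/status assertions above come from listing the control-plane pods and reading their phase and readiness conditions; a hypothetical one-liner that surfaces the same data (not the test's Go code):

kubectl --context functional-790468 get po -l tier=control-plane -n kube-system \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.status.phase}{"\n"}{end}'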

TestFunctional/serial/LogsCmd (1.52s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-790468 logs: (1.521043329s)
--- PASS: TestFunctional/serial/LogsCmd (1.52s)

TestFunctional/serial/LogsFileCmd (1.52s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 logs --file /tmp/TestFunctionalserialLogsFileCmd846878826/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-790468 logs --file /tmp/TestFunctionalserialLogsFileCmd846878826/001/logs.txt: (1.514414799s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.52s)

TestFunctional/serial/InvalidService (4s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-790468 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-790468
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-790468: exit status 115 (399.462301ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30347 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-790468 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.00s)
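SVC_UNREACHABLE here means the NodePort exists but no running pod backs it; one way to confirm that state by hand (illustrative; the label selector is an assumption, not taken from testdata/invalidsvc.yaml):

kubectl --context functional-790468 get endpoints invalid-svc    # ENDPOINTS stays <none> with no ready pods
kubectl --context functional-790468 get pods -l app=invalid-svc  # assumed selector: expect no Running pods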

TestFunctional/parallel/ConfigCmd (0.54s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-790468 config get cpus: exit status 14 (132.382337ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-790468 config get cpus: exit status 14 (85.11587ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.54s)
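Note the exit-code contract being asserted: `config get` on an unset key exits 14. The round trip the test runs:

minikube -p functional-790468 config get cpus     # exit 14: key not present
minikube -p functional-790468 config set cpus 2
minikube -p functional-790468 config get cpus     # prints 2, exit 0
minikube -p functional-790468 config unset cpus
minikube -p functional-790468 config get cpus     # exit 14 again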

TestFunctional/parallel/DashboardCmd (8.83s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-790468 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-790468 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 1606337: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.83s)

TestFunctional/parallel/DryRun (0.96s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-790468 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-790468 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (370.71653ms)
-- stdout --
	* [functional-790468] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-1577059/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1577059/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1209 04:26:53.685255 1605605 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:26:53.685514 1605605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:26:53.685546 1605605 out.go:374] Setting ErrFile to fd 2...
	I1209 04:26:53.685565 1605605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:26:53.686239 1605605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 04:26:53.686800 1605605 out.go:368] Setting JSON to false
	I1209 04:26:53.688008 1605605 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":32954,"bootTime":1765221460,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1209 04:26:53.688133 1605605 start.go:143] virtualization:  
	I1209 04:26:53.693439 1605605 out.go:179] * [functional-790468] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 04:26:53.696572 1605605 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 04:26:53.696669 1605605 notify.go:221] Checking for updates...
	I1209 04:26:53.702486 1605605 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:26:53.705504 1605605 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:26:53.708482 1605605 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1577059/.minikube
	I1209 04:26:53.711388 1605605 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 04:26:53.714396 1605605 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 04:26:53.717837 1605605 config.go:182] Loaded profile config "functional-790468": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:26:53.718505 1605605 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:26:53.754723 1605605 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:26:53.754837 1605605 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:26:53.956893 1605605 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-09 04:26:53.944234666 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:26:53.956992 1605605 docker.go:319] overlay module found
	I1209 04:26:53.962285 1605605 out.go:179] * Using the docker driver based on existing profile
	I1209 04:26:53.965157 1605605 start.go:309] selected driver: docker
	I1209 04:26:53.965173 1605605 start.go:927] validating driver "docker" against &{Name:functional-790468 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-790468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:26:53.965273 1605605 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 04:26:53.970713 1605605 out.go:203] 
	W1209 04:26:53.973668 1605605 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1209 04:26:53.977884 1605605 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-790468 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.96s)
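--dry-run exercises flag validation without mutating the profile; memory requests below the 1800MB usable minimum fail with RSRC_INSUFFICIENT_REQ_MEMORY (exit 23), while a plain dry run against the existing profile validates cleanly:

minikube start -p functional-790468 --dry-run --memory 250MB   # exit 23: below the 1800MB minimum
minikube start -p functional-790468 --dry-run                  # passes validation, nothing is created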

TestFunctional/parallel/InternationalLanguage (0.3s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-790468 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-790468 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (298.860838ms)
-- stdout --
	* [functional-790468] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-1577059/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1577059/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1209 04:26:53.386627 1605501 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:26:53.386858 1605501 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:26:53.386882 1605501 out.go:374] Setting ErrFile to fd 2...
	I1209 04:26:53.386900 1605501 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:26:53.389713 1605501 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 04:26:53.390228 1605501 out.go:368] Setting JSON to false
	I1209 04:26:53.391410 1605501 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":32954,"bootTime":1765221460,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1209 04:26:53.391520 1605501 start.go:143] virtualization:  
	I1209 04:26:53.395102 1605501 out.go:179] * [functional-790468] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1209 04:26:53.398181 1605501 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 04:26:53.398370 1605501 notify.go:221] Checking for updates...
	I1209 04:26:53.404490 1605501 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:26:53.407430 1605501 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:26:53.410308 1605501 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1577059/.minikube
	I1209 04:26:53.413229 1605501 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 04:26:53.416244 1605501 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 04:26:53.421057 1605501 config.go:182] Loaded profile config "functional-790468": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 04:26:53.421671 1605501 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:26:53.469484 1605501 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:26:53.469616 1605501 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:26:53.593245 1605501 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-09 04:26:53.573779479 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:26:53.593351 1605501 docker.go:319] overlay module found
	I1209 04:26:53.596451 1605501 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1209 04:26:53.599446 1605501 start.go:309] selected driver: docker
	I1209 04:26:53.599472 1605501 start.go:927] validating driver "docker" against &{Name:functional-790468 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-790468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:26:53.599572 1605501 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 04:26:53.603030 1605501 out.go:203] 
	W1209 04:26:53.605897 1605501 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1209 04:26:53.608748 1605501 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.30s)

TestFunctional/parallel/StatusCmd (1.3s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.30s)
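`status -f` formats the status struct through a Go template and `-o json` dumps the same fields as JSON; e.g. (the template is copied from the test verbatim, including its "kublet" key spelling):

minikube -p functional-790468 status -f 'host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
minikube -p functional-790468 status -o json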

TestFunctional/parallel/ServiceCmdConnect (8.58s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-790468 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-790468 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-z4dqc" [80322cd2-2150-4b04-b613-d1e0d214dbc8] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-z4dqc" [80322cd2-2150-4b04-b613-d1e0d214dbc8] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003230293s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:32686
functional_test.go:1680: http://192.168.49.2:32686: success! body:
Request served by hello-node-connect-7d85dfc575-z4dqc

HTTP/1.1 GET /

Host: 192.168.49.2:32686
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.58s)

TestFunctional/parallel/AddonsCmd (0.15s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (20.97s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [49074fab-f814-4ef4-8e3e-1b8705613475] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003332247s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-790468 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-790468 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-790468 get pvc myclaim -o=json
I1209 04:26:27.651720 1580521 retry.go:31] will retry after 2.006686832s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:30e01341-e14c-40fb-bdba-70099386fd18 ResourceVersion:696 Generation:0 CreationTimestamp:2025-12-09 04:26:27 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName:pvc-30e01341-e14c-40fb-bdba-70099386fd18 StorageClassName:0x400169ab40 VolumeMode:0x400169ab50 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-790468 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-790468 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [ac051eea-bd2d-4754-be21-615a179cf69a] Pending
helpers_test.go:352: "sp-pod" [ac051eea-bd2d-4754-be21-615a179cf69a] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003581124s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-790468 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-790468 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-790468 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [42e53aa1-b5b0-425e-acd2-a1506639a337] Pending
helpers_test.go:352: "sp-pod" [42e53aa1-b5b0-425e-acd2-a1506639a337] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003747227s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-790468 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (20.97s)
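From the retry dump above, the claim under test is a 500Mi ReadWriteOnce filesystem PVC named myclaim; a reconstruction of what testdata/storage-provisioner/pvc.yaml applies (inferred from the logged last-applied-configuration, not copied from the repo):

kubectl --context functional-790468 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
  volumeMode: Filesystem
EOF
# the test then polls until the phase flips from Pending to Bound
kubectl --context functional-790468 get pvc myclaim -o jsonpath='{.status.phase}'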

TestFunctional/parallel/SSHCmd (0.71s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.71s)

TestFunctional/parallel/CpCmd (2.42s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 ssh -n functional-790468 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 cp functional-790468:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1070591839/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 ssh -n functional-790468 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 ssh -n functional-790468 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.42s)
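`minikube cp` copies in both directions, with the node side addressed as <profile>:<path>, exactly as the helpers above do:

minikube -p functional-790468 cp testdata/cp-test.txt /home/docker/cp-test.txt             # host -> node
minikube -p functional-790468 cp functional-790468:/home/docker/cp-test.txt ./cp-test.txt  # node -> host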

TestFunctional/parallel/FileSync (0.37s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1580521/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 ssh "sudo cat /etc/test/nested/copy/1580521/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)

TestFunctional/parallel/CertSync (2.35s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1580521.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 ssh "sudo cat /etc/ssl/certs/1580521.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1580521.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 ssh "sudo cat /usr/share/ca-certificates/1580521.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/15805212.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 ssh "sudo cat /etc/ssl/certs/15805212.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/15805212.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 ssh "sudo cat /usr/share/ca-certificates/15805212.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.35s)

TestFunctional/parallel/NodeLabels (0.15s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-790468 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.15s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.71s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-790468 ssh "sudo systemctl is-active docker": exit status 1 (367.395123ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-790468 ssh "sudo systemctl is-active containerd": exit status 1 (342.753095ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.71s)
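The non-zero exits above are the expected outcome: `systemctl is-active` prints the unit state and encodes it in its exit status (0 only when active; 3 here for inactive), so on a crio cluster the docker and containerd units must come back non-zero. Sketch (the crio line is an assumption for contrast; the test itself only probes the non-active runtimes):

minikube -p functional-790468 ssh "sudo systemctl is-active docker"      # "inactive", ssh exit 3
minikube -p functional-790468 ssh "sudo systemctl is-active containerd"  # "inactive", ssh exit 3
minikube -p functional-790468 ssh "sudo systemctl is-active crio"        # expected "active", exit 0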

TestFunctional/parallel/License (0.21s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.21s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-790468 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-790468 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-790468 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-790468 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 1602846: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-790468 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.46s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-790468 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [93fe6718-3395-44a1-95a6-3cb36af346bc] Pending
helpers_test.go:352: "nginx-svc" [93fe6718-3395-44a1-95a6-3cb36af346bc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [93fe6718-3395-44a1-95a6-3cb36af346bc] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.008648988s
I1209 04:26:32.235325 1580521 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.46s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-790468 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.179.75 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-790468 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
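Note: the TunnelCmd serial steps above (StartTunnel, WaitService, IngressIP, AccessDirect, DeleteTunnel) cover the full tunnel lifecycle. A minimal repro sketch using only commands from this log, assuming the functional-790468 profile is running; curl stands in for the test's HTTP check and is not part of the harness:
out/minikube-linux-arm64 -p functional-790468 tunnel --alsologtostderr &
kubectl --context functional-790468 apply -f testdata/testsvc.yaml
kubectl --context functional-790468 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
curl http://10.103.179.75    # substitute the ingress IP printed by the previous command
kill %1                      # stopping the tunnel process tears down the route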

x
+
TestFunctional/parallel/ServiceCmd/DeployApp (8.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-790468 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-790468 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-bczrj" [8d0b7ad6-98e3-435d-ad49-f5090c9b2875] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-bczrj" [8d0b7ad6-98e3-435d-ad49-f5090c9b2875] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.004000653s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.21s)

x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "393.579421ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "56.325757ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "378.987113ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "57.649673ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)
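Note: the three ProfileCmd subtests exercise every profile listing variant. The light runs return in ~57ms versus ~380-390ms for the full listings, consistent with the light mode skipping the per-cluster status check:
out/minikube-linux-arm64 profile list
out/minikube-linux-arm64 profile list -l
out/minikube-linux-arm64 profile list -o json
out/minikube-linux-arm64 profile list -o json --light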

x
+
TestFunctional/parallel/MountCmd/any-port (7.05s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-790468 /tmp/TestFunctionalparallelMountCmdany-port2449063568/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765254404606797126" to /tmp/TestFunctionalparallelMountCmdany-port2449063568/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765254404606797126" to /tmp/TestFunctionalparallelMountCmdany-port2449063568/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765254404606797126" to /tmp/TestFunctionalparallelMountCmdany-port2449063568/001/test-1765254404606797126
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-790468 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (346.931622ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1209 04:26:44.954906 1580521 retry.go:31] will retry after 352.557936ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  9 04:26 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  9 04:26 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  9 04:26 test-1765254404606797126
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 ssh cat /mount-9p/test-1765254404606797126
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-790468 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [cc2fd961-607a-4f04-ac57-f3e434e8056a] Pending
helpers_test.go:352: "busybox-mount" [cc2fd961-607a-4f04-ac57-f3e434e8056a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [cc2fd961-607a-4f04-ac57-f3e434e8056a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [cc2fd961-607a-4f04-ac57-f3e434e8056a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003749601s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-790468 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-790468 /tmp/TestFunctionalparallelMountCmdany-port2449063568/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.05s)
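Note: a condensed version of the 9p mount check above, assuming the functional-790468 profile; /tmp/mount-src is a hypothetical stand-in for the generated temp directory used by the test:
out/minikube-linux-arm64 mount -p functional-790468 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 &
out/minikube-linux-arm64 -p functional-790468 ssh "findmnt -T /mount-9p | grep 9p"   # may need one retry while the mount comes up, as seen above
out/minikube-linux-arm64 -p functional-790468 ssh -- ls -la /mount-9p
out/minikube-linux-arm64 -p functional-790468 ssh "sudo umount -f /mount-9p"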

x
+
TestFunctional/parallel/ServiceCmd/List (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 service list -o json
functional_test.go:1504: Took "554.613686ms" to run "out/minikube-linux-arm64 -p functional-790468 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:32472
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

x
+
TestFunctional/parallel/ServiceCmd/Format (0.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.48s)

x
+
TestFunctional/parallel/ServiceCmd/URL (0.57s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32472
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.57s)
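Note: the ServiceCmd subtests reduce to a three-command workflow; all names below are taken from the log above:
kubectl --context functional-790468 create deployment hello-node --image kicbase/echo-server
kubectl --context functional-790468 expose deployment hello-node --type=NodePort --port=8080
out/minikube-linux-arm64 -p functional-790468 service hello-node --url   # printed http://192.168.49.2:32472 in this run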

x
+
TestFunctional/parallel/MountCmd/specific-port (2.11s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-790468 /tmp/TestFunctionalparallelMountCmdspecific-port3464666378/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-790468 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (458.068003ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1209 04:26:52.117673 1580521 retry.go:31] will retry after 349.76175ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-790468 /tmp/TestFunctionalparallelMountCmdspecific-port3464666378/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-790468 ssh "sudo umount -f /mount-9p": exit status 1 (349.261366ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-790468 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-790468 /tmp/TestFunctionalparallelMountCmdspecific-port3464666378/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.11s)

x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.65s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-790468 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1832654381/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-790468 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1832654381/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-790468 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1832654381/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-790468 ssh "findmnt -T" /mount1: exit status 1 (706.977987ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1209 04:26:54.505290 1580521 retry.go:31] will retry after 674.977453ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-790468 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-790468 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1832654381/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-790468 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1832654381/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-790468 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1832654381/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.65s)
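Note: VerifyCleanup shows that a single mount --kill=true reaps every mount daemon for the profile instead of stopping them one by one. A sketch with a hypothetical /tmp/mount-src source directory:
out/minikube-linux-arm64 mount -p functional-790468 /tmp/mount-src:/mount1 --alsologtostderr -v=1 &
out/minikube-linux-arm64 mount -p functional-790468 /tmp/mount-src:/mount2 --alsologtostderr -v=1 &
out/minikube-linux-arm64 mount -p functional-790468 /tmp/mount-src:/mount3 --alsologtostderr -v=1 &
out/minikube-linux-arm64 -p functional-790468 ssh "findmnt -T" /mount1   # repeat for /mount2 and /mount3
out/minikube-linux-arm64 mount -p functional-790468 --kill=true          # all three daemons exit, as the "assuming dead" lines above confirm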

x
+
TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

x
+
TestFunctional/parallel/Version/components (0.74s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.74s)

x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-790468 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
localhost/minikube-local-cache-test:functional-790468
localhost/kicbase/echo-server:functional-790468
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-790468 image ls --format short --alsologtostderr:
I1209 04:27:07.793696 1607910 out.go:360] Setting OutFile to fd 1 ...
I1209 04:27:07.793891 1607910 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:27:07.793919 1607910 out.go:374] Setting ErrFile to fd 2...
I1209 04:27:07.793936 1607910 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:27:07.794227 1607910 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
I1209 04:27:07.802701 1607910 config.go:182] Loaded profile config "functional-790468": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1209 04:27:07.802894 1607910 config.go:182] Loaded profile config "functional-790468": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1209 04:27:07.803476 1607910 cli_runner.go:164] Run: docker container inspect functional-790468 --format={{.State.Status}}
I1209 04:27:07.830368 1607910 ssh_runner.go:195] Run: systemctl --version
I1209 04:27:07.830426 1607910 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-790468
I1209 04:27:07.859873 1607910 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34250 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-790468/id_rsa Username:docker}
I1209 04:27:07.974108 1607910 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-790468 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 4f982e73e768a │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ docker.io/kicbase/echo-server           │ latest             │ ce2d2cda2d858 │ 4.79MB │
│ localhost/kicbase/echo-server           │ functional-790468  │ ce2d2cda2d858 │ 4.79MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 1b34917560f09 │ 72.6MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ localhost/minikube-local-cache-test     │ functional-790468  │ 683acc28bafe4 │ 3.33kB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ 2c5f0dedd21c2 │ 60.9MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ b178af3d91f80 │ 84.8MB │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 94bff1bec29fd │ 75.9MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ public.ecr.aws/nginx/nginx              │ alpine             │ cbad6347cca28 │ 54.8MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-790468 image ls --format table --alsologtostderr:
I1209 04:27:08.400524 1608081 out.go:360] Setting OutFile to fd 1 ...
I1209 04:27:08.400831 1608081 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:27:08.400863 1608081 out.go:374] Setting ErrFile to fd 2...
I1209 04:27:08.400882 1608081 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:27:08.401176 1608081 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
I1209 04:27:08.401896 1608081 config.go:182] Loaded profile config "functional-790468": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1209 04:27:08.402073 1608081 config.go:182] Loaded profile config "functional-790468": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1209 04:27:08.404751 1608081 cli_runner.go:164] Run: docker container inspect functional-790468 --format={{.State.Status}}
I1209 04:27:08.430275 1608081 ssh_runner.go:195] Run: systemctl --version
I1209 04:27:08.430328 1608081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-790468
I1209 04:27:08.452155 1608081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34250 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-790468/id_rsa Username:docker}
I1209 04:27:08.562036 1608081 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-790468 image ls --format json --alsologtostderr:
[{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"683acc28bafe400445d3ce44daffd5894b3ca30482fa0fb9ee9bc9c3984553de","repoDigests":["localhost/minikube-local-cache-test@sha256:8e237ec9f0deafc3c122c363702edcffb9418c7a1a0e5050aa4f2ba29463f043"],"repoTags":["localhost/minikube-local-cache-test:functional-790468"],"size":"3330"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["public.ecr.aws/nginx/nginx@sha256:6224130b55f5d4f555846ebdedec6ce07822ebf205b9c1b77c2fd91abab6eb25","public.ecr.aws/nginx/nginx@sha256:b7198452993fe37c15651e967713dd500eb4367f80a2d63c3bb5b172e46fc3b5"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"54827372"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9a94f333d6fe202d804910534ef052b2cfa650982cdcbe48e92339c8d314dd84","registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"84753391"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b","docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b","localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-790468"],"size":"4789170"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949","repoDigests":["registry.k8s.io/kube-scheduler@sha256:3eff58b308cdc6c65cf030333090e14cc77bea4ed4ea9a92d212a0babc924ffe","registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"51592021"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"60857170"},{"id":"1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4b3abd4d4543ac8451f97e9771aa0a29a9958e51ac02fe44900b4a224031df89","registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"72629077"},{"id":"94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786","repoDigests":["registry.k8s.io/kube-proxy@sha256:20a31b16a001e3e4db71a17ba8effc4b145a3afa2086e844ab40dc5baa5b8d12","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"75941783"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-790468 image ls --format json --alsologtostderr:
I1209 04:27:08.109033 1607989 out.go:360] Setting OutFile to fd 1 ...
I1209 04:27:08.109161 1607989 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:27:08.109167 1607989 out.go:374] Setting ErrFile to fd 2...
I1209 04:27:08.109172 1607989 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:27:08.109460 1607989 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
I1209 04:27:08.110108 1607989 config.go:182] Loaded profile config "functional-790468": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1209 04:27:08.110215 1607989 config.go:182] Loaded profile config "functional-790468": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1209 04:27:08.110863 1607989 cli_runner.go:164] Run: docker container inspect functional-790468 --format={{.State.Status}}
I1209 04:27:08.141878 1607989 ssh_runner.go:195] Run: systemctl --version
I1209 04:27:08.142026 1607989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-790468
I1209 04:27:08.174022 1607989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34250 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-790468/id_rsa Username:docker}
I1209 04:27:08.286536 1607989 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-790468 image ls --format yaml --alsologtostderr:
- id: 683acc28bafe400445d3ce44daffd5894b3ca30482fa0fb9ee9bc9c3984553de
repoDigests:
- localhost/minikube-local-cache-test@sha256:8e237ec9f0deafc3c122c363702edcffb9418c7a1a0e5050aa4f2ba29463f043
repoTags:
- localhost/minikube-local-cache-test:functional-790468
size: "3330"
- id: b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9a94f333d6fe202d804910534ef052b2cfa650982cdcbe48e92339c8d314dd84
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "84753391"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4b3abd4d4543ac8451f97e9771aa0a29a9958e51ac02fe44900b4a224031df89
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "72629077"
- id: 94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786
repoDigests:
- registry.k8s.io/kube-proxy@sha256:20a31b16a001e3e4db71a17ba8effc4b145a3afa2086e844ab40dc5baa5b8d12
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "75941783"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "60857170"
- id: 4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:3eff58b308cdc6c65cf030333090e14cc77bea4ed4ea9a92d212a0babc924ffe
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "51592021"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b
- docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-790468
size: "4789170"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:6224130b55f5d4f555846ebdedec6ce07822ebf205b9c1b77c2fd91abab6eb25
- public.ecr.aws/nginx/nginx@sha256:b7198452993fe37c15651e967713dd500eb4367f80a2d63c3bb5b172e46fc3b5
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "54827372"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-790468 image ls --format yaml --alsologtostderr:
I1209 04:27:07.789128 1607911 out.go:360] Setting OutFile to fd 1 ...
I1209 04:27:07.789232 1607911 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:27:07.789237 1607911 out.go:374] Setting ErrFile to fd 2...
I1209 04:27:07.789243 1607911 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:27:07.789562 1607911 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
I1209 04:27:07.790649 1607911 config.go:182] Loaded profile config "functional-790468": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1209 04:27:07.790773 1607911 config.go:182] Loaded profile config "functional-790468": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1209 04:27:07.791305 1607911 cli_runner.go:164] Run: docker container inspect functional-790468 --format={{.State.Status}}
I1209 04:27:07.820198 1607911 ssh_runner.go:195] Run: systemctl --version
I1209 04:27:07.820263 1607911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-790468
I1209 04:27:07.868195 1607911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34250 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-790468/id_rsa Username:docker}
I1209 04:27:07.974842 1607911 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)
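Note: the four ImageList subtests cover every image ls output format; their stderr traces show each one resolves the same data by running sudo crictl images --output json inside the node:
out/minikube-linux-arm64 -p functional-790468 image ls --format short
out/minikube-linux-arm64 -p functional-790468 image ls --format table
out/minikube-linux-arm64 -p functional-790468 image ls --format json
out/minikube-linux-arm64 -p functional-790468 image ls --format yaml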

x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-790468 ssh pgrep buildkitd: exit status 1 (375.394806ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 image build -t localhost/my-image:functional-790468 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-790468 image build -t localhost/my-image:functional-790468 testdata/build --alsologtostderr: (3.453002285s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-790468 image build -t localhost/my-image:functional-790468 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 2355f849e8b
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-790468
--> f6f58e3264a
Successfully tagged localhost/my-image:functional-790468
f6f58e3264afb60b0a89437950282541b1650439c7bdf4d43a271c62209481b9
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-790468 image build -t localhost/my-image:functional-790468 testdata/build --alsologtostderr:
I1209 04:27:08.473103 1608087 out.go:360] Setting OutFile to fd 1 ...
I1209 04:27:08.473933 1608087 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:27:08.473983 1608087 out.go:374] Setting ErrFile to fd 2...
I1209 04:27:08.474004 1608087 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:27:08.474315 1608087 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
I1209 04:27:08.475404 1608087 config.go:182] Loaded profile config "functional-790468": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1209 04:27:08.476295 1608087 config.go:182] Loaded profile config "functional-790468": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1209 04:27:08.476921 1608087 cli_runner.go:164] Run: docker container inspect functional-790468 --format={{.State.Status}}
I1209 04:27:08.502434 1608087 ssh_runner.go:195] Run: systemctl --version
I1209 04:27:08.502502 1608087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-790468
I1209 04:27:08.526952 1608087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34250 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-790468/id_rsa Username:docker}
I1209 04:27:08.633491 1608087 build_images.go:162] Building image from path: /tmp/build.2242956059.tar
I1209 04:27:08.633579 1608087 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1209 04:27:08.641500 1608087 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2242956059.tar
I1209 04:27:08.645292 1608087 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2242956059.tar: stat -c "%s %y" /var/lib/minikube/build/build.2242956059.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2242956059.tar': No such file or directory
I1209 04:27:08.645324 1608087 ssh_runner.go:362] scp /tmp/build.2242956059.tar --> /var/lib/minikube/build/build.2242956059.tar (3072 bytes)
I1209 04:27:08.663527 1608087 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2242956059
I1209 04:27:08.671687 1608087 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2242956059 -xf /var/lib/minikube/build/build.2242956059.tar
I1209 04:27:08.679912 1608087 crio.go:315] Building image: /var/lib/minikube/build/build.2242956059
I1209 04:27:08.679979 1608087 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-790468 /var/lib/minikube/build/build.2242956059 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1209 04:27:11.825744 1608087 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-790468 /var/lib/minikube/build/build.2242956059 --cgroup-manager=cgroupfs: (3.145739027s)
I1209 04:27:11.825823 1608087 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2242956059
I1209 04:27:11.833882 1608087 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2242956059.tar
I1209 04:27:11.843943 1608087 build_images.go:218] Built localhost/my-image:functional-790468 from /tmp/build.2242956059.tar
I1209 04:27:11.843972 1608087 build_images.go:134] succeeded building to: functional-790468
I1209 04:27:11.843978 1608087 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.08s)
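Note: since no buildkitd is running in the node (the pgrep probe exits 1), the build falls back to podman inside the node, as the stderr trace shows. The three STEP lines imply a build context of roughly this shape; the Dockerfile below is reconstructed from the log and may not match testdata/build exactly:
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
Build and verify with:
out/minikube-linux-arm64 -p functional-790468 image build -t localhost/my-image:functional-790468 testdata/build --alsologtostderr
out/minikube-linux-arm64 -p functional-790468 image ls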

x
+
TestFunctional/parallel/ImageCommands/Setup (0.62s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-790468
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.62s)

x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 image load --daemon kicbase/echo-server:functional-790468 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-790468 image load --daemon kicbase/echo-server:functional-790468 --alsologtostderr: (3.968644655s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.26s)

x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 image load --daemon kicbase/echo-server:functional-790468 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 image ls
2025/12/09 04:27:03 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.51s)

x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-790468
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 image load --daemon kicbase/echo-server:functional-790468 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.38s)

x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 image save kicbase/echo-server:functional-790468 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 image rm kicbase/echo-server:functional-790468 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.65s)

x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.81s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-790468
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-790468 image save --daemon kicbase/echo-server:functional-790468 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-790468
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.50s)
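
Note on the round-trip above: the test deletes the host-side tag, has minikube write the cached image back into the local Docker daemon, then inspects it under the localhost/ prefix visible in the docker image inspect call. A minimal by-hand reproduction with the same commands, assuming the functional-790468 profile is still running:

  $ docker rmi kicbase/echo-server:functional-790468
  $ out/minikube-linux-arm64 -p functional-790468 image save --daemon kicbase/echo-server:functional-790468
  $ docker image inspect localhost/kicbase/echo-server:functional-790468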

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-790468
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-790468
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-790468
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22081-1577059/.minikube/files/etc/test/nested/copy/1580521/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.82s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-331811 cache add registry.k8s.io/pause:3.1: (1.32967111s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-331811 cache add registry.k8s.io/pause:3.3: (1.341405799s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-331811 cache add registry.k8s.io/pause:latest: (1.147433647s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.82s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-331811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach255071761/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 cache add minikube-local-cache-test:functional-331811
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 cache delete minikube-local-cache-test:functional-331811
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-331811
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.95s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-331811 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (290.3812ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.95s)
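
The cache_reload sequence can be replayed by hand with the same commands, assuming the functional-331811 profile is up. Note that crictl inspecti is crictl's image-inspect subcommand, not a typo, and that the first inspecti is expected to fail; that failure is exactly what cache reload must repair:

  $ out/minikube-linux-arm64 -p functional-331811 ssh sudo crictl rmi registry.k8s.io/pause:latest
  $ out/minikube-linux-arm64 -p functional-331811 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # fails: image was removed
  $ out/minikube-linux-arm64 -p functional-331811 cache reload
  $ out/minikube-linux-arm64 -p functional-331811 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # succeeds after reload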

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.93s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 logs
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.93s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (0.98s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs1253091348/001/logs.txt
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (0.98s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.51s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-331811 config get cpus: exit status 14 (119.196477ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-331811 config get cpus: exit status 14 (68.346649ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.51s)
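
The two exit status 14 results above are the pass condition: per the stderr, status 14 means the requested key is not present in the config, so the test alternates unset/set and checks both states. The same toggle by hand, against the same profile:

  $ out/minikube-linux-arm64 -p functional-331811 config set cpus 2
  $ out/minikube-linux-arm64 -p functional-331811 config get cpus      # prints 2
  $ out/minikube-linux-arm64 -p functional-331811 config unset cpus
  $ out/minikube-linux-arm64 -p functional-331811 config get cpus      # exit status 14: key not found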

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.44s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-331811 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-331811 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (205.289078ms)
-- stdout --
	* [functional-331811] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-1577059/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1577059/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1209 04:56:29.075668 1637790 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:56:29.076080 1637790 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:56:29.076086 1637790 out.go:374] Setting ErrFile to fd 2...
	I1209 04:56:29.076091 1637790 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:56:29.076573 1637790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 04:56:29.077064 1637790 out.go:368] Setting JSON to false
	I1209 04:56:29.078093 1637790 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":34729,"bootTime":1765221460,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1209 04:56:29.078179 1637790 start.go:143] virtualization:  
	I1209 04:56:29.082313 1637790 out.go:179] * [functional-331811] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1209 04:56:29.086405 1637790 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 04:56:29.086830 1637790 notify.go:221] Checking for updates...
	I1209 04:56:29.093374 1637790 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:56:29.096335 1637790 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:56:29.099831 1637790 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1577059/.minikube
	I1209 04:56:29.102802 1637790 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 04:56:29.105780 1637790 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 04:56:29.109300 1637790 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 04:56:29.109871 1637790 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:56:29.153442 1637790 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:56:29.153576 1637790 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:56:29.214403 1637790 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 04:56:29.204956297 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:56:29.214544 1637790 docker.go:319] overlay module found
	I1209 04:56:29.217618 1637790 out.go:179] * Using the docker driver based on existing profile
	I1209 04:56:29.220576 1637790 start.go:309] selected driver: docker
	I1209 04:56:29.220605 1637790 start.go:927] validating driver "docker" against &{Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:56:29.220696 1637790 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 04:56:29.224320 1637790 out.go:203] 
	W1209 04:56:29.227262 1637790 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1209 04:56:29.230112 1637790 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-331811 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.44s)
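
The exit status 23 above is deliberate: even with --dry-run, client-side validation runs, and the 250MB request is rejected against the 1800MB usable minimum (RSRC_INSUFFICIENT_REQ_MEMORY) without touching the cluster. The test then confirms a dry run without the undersized memory flag passes validation; the two probes, reproducible by hand:

  $ out/minikube-linux-arm64 start -p functional-331811 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-beta.0   # exit status 23
  $ out/minikube-linux-arm64 start -p functional-331811 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-beta.0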

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-331811 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-331811 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (202.34382ms)
-- stdout --
	* [functional-331811] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-1577059/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1577059/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1209 04:56:28.879079 1637743 out.go:360] Setting OutFile to fd 1 ...
	I1209 04:56:28.879258 1637743 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:56:28.879290 1637743 out.go:374] Setting ErrFile to fd 2...
	I1209 04:56:28.879312 1637743 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 04:56:28.879715 1637743 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 04:56:28.880152 1637743 out.go:368] Setting JSON to false
	I1209 04:56:28.881014 1637743 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":34729,"bootTime":1765221460,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1209 04:56:28.881107 1637743 start.go:143] virtualization:  
	I1209 04:56:28.884673 1637743 out.go:179] * [functional-331811] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1209 04:56:28.888663 1637743 out.go:179]   - MINIKUBE_LOCATION=22081
	I1209 04:56:28.888728 1637743 notify.go:221] Checking for updates...
	I1209 04:56:28.894441 1637743 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 04:56:28.897311 1637743 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22081-1577059/kubeconfig
	I1209 04:56:28.900071 1637743 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1577059/.minikube
	I1209 04:56:28.902819 1637743 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1209 04:56:28.905731 1637743 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 04:56:28.909017 1637743 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1209 04:56:28.909635 1637743 driver.go:422] Setting default libvirt URI to qemu:///system
	I1209 04:56:28.945864 1637743 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1209 04:56:28.945978 1637743 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 04:56:29.008150 1637743 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-09 04:56:28.995972849 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 04:56:29.008263 1637743 docker.go:319] overlay module found
	I1209 04:56:29.011385 1637743 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1209 04:56:29.014314 1637743 start.go:309] selected driver: docker
	I1209 04:56:29.014344 1637743 start.go:927] validating driver "docker" against &{Name:functional-331811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 04:56:29.014460 1637743 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 04:56:29.018096 1637743 out.go:203] 
	W1209 04:56:29.020999 1637743 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1209 04:56:29.023871 1637743 out.go:203] 
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.20s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.74s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.74s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 ssh -n functional-331811 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 cp functional-331811:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp691637449/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 ssh -n functional-331811 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 ssh -n functional-331811 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.25s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1580521/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 ssh "sudo cat /etc/test/nested/copy/1580521/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.28s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.7s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1580521.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 ssh "sudo cat /etc/ssl/certs/1580521.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1580521.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 ssh "sudo cat /usr/share/ca-certificates/1580521.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/15805212.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 ssh "sudo cat /etc/ssl/certs/15805212.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/15805212.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 ssh "sudo cat /usr/share/ca-certificates/15805212.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.70s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.58s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-331811 ssh "sudo systemctl is-active docker": exit status 1 (270.294897ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-331811 ssh "sudo systemctl is-active containerd": exit status 1 (304.531079ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.58s)
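
Both non-zero exits above are the expected outcome: systemctl is-active prints "inactive" and returns a non-zero status for a stopped unit, which is what the test requires of the docker and containerd units when crio is the active runtime. The same probes by hand:

  $ out/minikube-linux-arm64 -p functional-331811 ssh "sudo systemctl is-active docker"        # prints inactive, non-zero exit
  $ out/minikube-linux-arm64 -p functional-331811 ssh "sudo systemctl is-active containerd"    # prints inactive, non-zero exit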

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-331811 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-331811 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.10s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "337.706318ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "56.269424ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "342.507053ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "56.466431ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.40s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (2.03s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-331811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1191664544/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-331811 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (337.265888ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1209 04:56:22.251666 1580521 retry.go:31] will retry after 653.792019ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-331811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1191664544/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-331811 ssh "sudo umount -f /mount-9p": exit status 1 (267.591898ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-331811 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-331811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1191664544/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (2.03s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.78s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-331811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3534500844/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-331811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3534500844/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-331811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3534500844/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-331811 ssh "findmnt -T" /mount1: exit status 1 (554.758036ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1209 04:56:24.501683 1580521 retry.go:31] will retry after 308.921037ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-331811 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-331811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3534500844/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-331811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3534500844/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-331811 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3534500844/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.78s)
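
The cleanup path above hinges on mount --kill=true, which terminates every mount helper for the profile in one call; the subsequent "unable to find parent, assuming dead" lines confirm the processes are already gone. A sketch of the same flow, with /tmp/somedir standing in for the per-test temp directory:

  $ out/minikube-linux-arm64 mount -p functional-331811 /tmp/somedir:/mount1 --alsologtostderr -v=1 &
  $ out/minikube-linux-arm64 -p functional-331811 ssh "findmnt -T" /mount1
  $ out/minikube-linux-arm64 mount -p functional-331811 --kill=true    # kills the background mount process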

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.49s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.49s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-331811 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
localhost/minikube-local-cache-test:functional-331811
localhost/kicbase/echo-server:functional-331811
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-331811 image ls --format short --alsologtostderr:
I1209 04:56:41.714543 1639927 out.go:360] Setting OutFile to fd 1 ...
I1209 04:56:41.714712 1639927 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:56:41.714722 1639927 out.go:374] Setting ErrFile to fd 2...
I1209 04:56:41.714728 1639927 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:56:41.714972 1639927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
I1209 04:56:41.715577 1639927 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1209 04:56:41.715696 1639927 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1209 04:56:41.716241 1639927 cli_runner.go:164] Run: docker container inspect functional-331811 --format={{.State.Status}}
I1209 04:56:41.732879 1639927 ssh_runner.go:195] Run: systemctl --version
I1209 04:56:41.732938 1639927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
I1209 04:56:41.750661 1639927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
I1209 04:56:41.857507 1639927 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.24s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-331811 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ localhost/kicbase/echo-server           │ functional-331811  │ ce2d2cda2d858 │ 4.79MB │
│ localhost/my-image                      │ functional-331811  │ 714c6de5ff97e │ 1.64MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 68b5f775f1876 │ 72.2MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 16378741539f1 │ 49.8MB │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ 2c5f0dedd21c2 │ 60.9MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ ccd634d9bcc36 │ 85MB   │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ e08f4d9d2e6ed │ 74.5MB │
│ gcr.io/k8s-minikube/busybox             │ latest             │ 71a676dd070f4 │ 1.63MB │
│ localhost/minikube-local-cache-test     │ functional-331811  │ 683acc28bafe4 │ 3.33kB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 404c2e1286177 │ 74.1MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-331811 image ls --format table --alsologtostderr:
I1209 04:56:46.405744 1640428 out.go:360] Setting OutFile to fd 1 ...
I1209 04:56:46.405881 1640428 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:56:46.405894 1640428 out.go:374] Setting ErrFile to fd 2...
I1209 04:56:46.405914 1640428 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:56:46.406204 1640428 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
I1209 04:56:46.406872 1640428 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1209 04:56:46.407009 1640428 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1209 04:56:46.407548 1640428 cli_runner.go:164] Run: docker container inspect functional-331811 --format={{.State.Status}}
I1209 04:56:46.425037 1640428 ssh_runner.go:195] Run: systemctl --version
I1209 04:56:46.425097 1640428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
I1209 04:56:46.442156 1640428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
I1209 04:56:46.545178 1640428 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-331811 image ls --format json --alsologtostderr:
[{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-331811"],"size":"4788229"},{"id":"683acc28bafe400445d3ce44daffd5894b3ca30482fa0fb9ee9bc9c3984553de","repoDigests":["localhost/minikube-local-cache-test@sha256:8e237ec9f0deafc3c122c363702edcffb9418c7a1a0e5050aa4f2ba29463f043"],"repoTags":["localhost/minikube-local-cache-test:functional-331811"],"size":"3330"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox
@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58","registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"84949999"},{"id":
"68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d","registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"72170325"},{"id":"404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904","repoDigests":["registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478","registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"74106775"},{"id":"16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6","registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83
ad2429e9753c7e4115d461ef4b23802dfa1d34b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"49822549"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"5f468f8ded3af13d99ce17a7be0b4d36f8
255859346e3268aecf20687fbd6a0c","repoDigests":["docker.io/library/7bbaf51b02e7e51051b419c5a03476a544ea793948728daa4853f3a52befd371-tmp@sha256:ee419a60d02878a663e1dcfd410491abaf4d01487ffcb247e1ce409220eed4a0"],"repoTags":[],"size":"1638178"},{"id":"714c6de5ff97e63e2c04c601cb24338a755dbe3dde0b7da907d3f42a042ebdec","repoDigests":["localhost/my-image@sha256:e9a53abce67d050492e785c054ac1622d137433e70993d11600652194cd75797"],"repoTags":["localhost/my-image:functional-331811"],"size":"1640789"},{"id":"e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6","registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"74491780"},{"id":"2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee
3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"60857170"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-331811 image ls --format json --alsologtostderr:
I1209 04:56:46.154592 1640385 out.go:360] Setting OutFile to fd 1 ...
I1209 04:56:46.154727 1640385 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:56:46.154745 1640385 out.go:374] Setting ErrFile to fd 2...
I1209 04:56:46.154751 1640385 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:56:46.155112 1640385 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
I1209 04:56:46.156083 1640385 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1209 04:56:46.156229 1640385 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1209 04:56:46.157009 1640385 cli_runner.go:164] Run: docker container inspect functional-331811 --format={{.State.Status}}
I1209 04:56:46.175064 1640385 ssh_runner.go:195] Run: systemctl --version
I1209 04:56:46.175126 1640385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
I1209 04:56:46.192180 1640385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
I1209 04:56:46.297288 1640385 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.24s)
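Of the four list formats exercised here, JSON is the one meant for post-processing. As an illustrative example (jq is assumed to be available on the host; it is not part of this test run), the tag list in the output above can be pulled out with:

  out/minikube-linux-arm64 -p functional-331811 image ls --format json | jq -r '.[].repoTags[]'

which yields the same repo:tag lines as the short format while keeping digests and sizes available for further filtering.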

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-331811 image ls --format yaml --alsologtostderr:
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-331811
size: "4788229"
- id: 683acc28bafe400445d3ce44daffd5894b3ca30482fa0fb9ee9bc9c3984553de
repoDigests:
- localhost/minikube-local-cache-test@sha256:8e237ec9f0deafc3c122c363702edcffb9418c7a1a0e5050aa4f2ba29463f043
repoTags:
- localhost/minikube-local-cache-test:functional-331811
size: "3330"
- id: 2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "60857170"
- id: ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
- registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "84949999"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
- registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "74491780"
- id: 68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
- registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "72170325"
- id: 404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904
repoDigests:
- registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "74106775"
- id: 16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
- registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "49822549"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-331811 image ls --format yaml --alsologtostderr:
I1209 04:56:41.958783 1639969 out.go:360] Setting OutFile to fd 1 ...
I1209 04:56:41.958999 1639969 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:56:41.959026 1639969 out.go:374] Setting ErrFile to fd 2...
I1209 04:56:41.959046 1639969 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:56:41.959375 1639969 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
I1209 04:56:41.960080 1639969 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1209 04:56:41.960260 1639969 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1209 04:56:41.960898 1639969 cli_runner.go:164] Run: docker container inspect functional-331811 --format={{.State.Status}}
I1209 04:56:41.978815 1639969 ssh_runner.go:195] Run: systemctl --version
I1209 04:56:41.978869 1639969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
I1209 04:56:41.996498 1639969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
I1209 04:56:42.113091 1639969 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.93s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-331811 ssh pgrep buildkitd: exit status 1 (288.304701ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 image build -t localhost/my-image:functional-331811 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-331811 image build -t localhost/my-image:functional-331811 testdata/build --alsologtostderr: (3.394541326s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-331811 image build -t localhost/my-image:functional-331811 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 5f468f8ded3
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-331811
--> 714c6de5ff9
Successfully tagged localhost/my-image:functional-331811
714c6de5ff97e63e2c04c601cb24338a755dbe3dde0b7da907d3f42a042ebdec
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-331811 image build -t localhost/my-image:functional-331811 testdata/build --alsologtostderr:
I1209 04:56:42.507279 1640068 out.go:360] Setting OutFile to fd 1 ...
I1209 04:56:42.507518 1640068 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:56:42.507547 1640068 out.go:374] Setting ErrFile to fd 2...
I1209 04:56:42.507566 1640068 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:56:42.507850 1640068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
I1209 04:56:42.508533 1640068 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1209 04:56:42.509226 1640068 config.go:182] Loaded profile config "functional-331811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1209 04:56:42.509880 1640068 cli_runner.go:164] Run: docker container inspect functional-331811 --format={{.State.Status}}
I1209 04:56:42.527900 1640068 ssh_runner.go:195] Run: systemctl --version
I1209 04:56:42.527961 1640068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331811
I1209 04:56:42.545799 1640068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34255 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/functional-331811/id_rsa Username:docker}
I1209 04:56:42.661794 1640068 build_images.go:162] Building image from path: /tmp/build.193835143.tar
I1209 04:56:42.661869 1640068 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1209 04:56:42.669813 1640068 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.193835143.tar
I1209 04:56:42.673469 1640068 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.193835143.tar: stat -c "%s %y" /var/lib/minikube/build/build.193835143.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.193835143.tar': No such file or directory
I1209 04:56:42.673499 1640068 ssh_runner.go:362] scp /tmp/build.193835143.tar --> /var/lib/minikube/build/build.193835143.tar (3072 bytes)
I1209 04:56:42.694690 1640068 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.193835143
I1209 04:56:42.703772 1640068 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.193835143 -xf /var/lib/minikube/build/build.193835143.tar
I1209 04:56:42.712386 1640068 crio.go:315] Building image: /var/lib/minikube/build/build.193835143
I1209 04:56:42.712464 1640068 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-331811 /var/lib/minikube/build/build.193835143 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1209 04:56:45.823480 1640068 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-331811 /var/lib/minikube/build/build.193835143 --cgroup-manager=cgroupfs: (3.110990393s)
I1209 04:56:45.823551 1640068 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.193835143
I1209 04:56:45.831396 1640068 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.193835143.tar
I1209 04:56:45.839151 1640068 build_images.go:218] Built localhost/my-image:functional-331811 from /tmp/build.193835143.tar
I1209 04:56:45.839182 1640068 build_images.go:134] succeeded building to: functional-331811
I1209 04:56:45.839188 1640068 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.93s)
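The STEP lines in the stdout above imply that testdata/build contains a Containerfile of roughly the following shape (a reconstruction from the logged steps, not the verbatim fixture; content.txt is whatever file sits next to it in the testdata directory):

  FROM gcr.io/k8s-minikube/busybox
  RUN true
  ADD content.txt /

Note that with the crio runtime the build itself is delegated to podman inside the node, visible in the stderr as 'sudo podman build ... --cgroup-manager=cgroupfs'.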

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-331811
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.32s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 image load --daemon kicbase/echo-server:functional-331811 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.83s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 image load --daemon kicbase/echo-server:functional-331811 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.83s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-331811
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 image load --daemon kicbase/echo-server:functional-331811 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.08s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.38s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 image save kicbase/echo-server:functional-331811 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.38s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.53s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 image rm kicbase/echo-server:functional-331811 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.53s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.77s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.77s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-331811
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 image save --daemon kicbase/echo-server:functional-331811 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-331811
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.43s)
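Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon amount to a full save/load round-trip, reproducible by hand with the same commands the tests logged above:

  out/minikube-linux-arm64 -p functional-331811 image save kicbase/echo-server:functional-331811 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
  out/minikube-linux-arm64 -p functional-331811 image rm kicbase/echo-server:functional-331811
  out/minikube-linux-arm64 -p functional-331811 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
  out/minikube-linux-arm64 -p functional-331811 image save --daemon kicbase/echo-server:functional-331811

with 'out/minikube-linux-arm64 -p functional-331811 image ls' (or 'docker image inspect' on the host) verifying each step, exactly as the tests do.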

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-331811 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-331811
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-331811
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-331811
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (220.01s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1209 04:59:22.262353 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:59:22.272053 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:59:22.284037 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:59:22.305326 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:59:22.346673 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:59:22.428063 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:59:22.589557 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:59:22.911165 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:59:23.553238 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:59:24.834613 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:59:27.395923 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:59:31.980241 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:59:32.518203 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:59:42.759601 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:00:03.241681 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:00:44.203112 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:01:21.780903 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-790468/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-634473 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m39.078909519s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 status --alsologtostderr -v 5
E1209 05:02:06.125041 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/StartCluster (220.01s)

TestMultiControlPlane/serial/DeployApp (8.18s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-634473 kubectl -- rollout status deployment/busybox: (5.508476238s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 kubectl -- exec busybox-7b57f96db7-5fvp7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 kubectl -- exec busybox-7b57f96db7-bp5sh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 kubectl -- exec busybox-7b57f96db7-dt58k -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 kubectl -- exec busybox-7b57f96db7-5fvp7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 kubectl -- exec busybox-7b57f96db7-bp5sh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 kubectl -- exec busybox-7b57f96db7-dt58k -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 kubectl -- exec busybox-7b57f96db7-5fvp7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 kubectl -- exec busybox-7b57f96db7-bp5sh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 kubectl -- exec busybox-7b57f96db7-dt58k -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.18s)
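The three distinct pod names in the exec commands above (busybox-7b57f96db7-5fvp7, -bp5sh, -dt58k) show the busybox deployment runs three replicas. A quick way to confirm they are spread across the HA nodes, in the same kubectl-passthrough style the test uses (illustrative, not part of this run):

  out/minikube-linux-arm64 -p ha-634473 kubectl -- get pods -o wide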

TestMultiControlPlane/serial/PingHostFromPods (1.48s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 kubectl -- exec busybox-7b57f96db7-5fvp7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 kubectl -- exec busybox-7b57f96db7-5fvp7 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 kubectl -- exec busybox-7b57f96db7-bp5sh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 kubectl -- exec busybox-7b57f96db7-bp5sh -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 kubectl -- exec busybox-7b57f96db7-dt58k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 kubectl -- exec busybox-7b57f96db7-dt58k -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.48s)
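The probe each pod runs is worth unpacking; written out as a standalone sketch (comments added here; the line-5 assumption is exactly what the test's awk expression encodes):

  # resolve host.minikube.internal from inside the pod; busybox nslookup
  # prints the answer on line 5, and the third space-separated field is the IP
  nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3
  # then ping the host side of the docker network directly
  ping -c 1 192.168.49.1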

TestMultiControlPlane/serial/AddWorkerNode (59.94s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-634473 node add --alsologtostderr -v 5: (58.808426022s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-634473 status --alsologtostderr -v 5: (1.134342996s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.94s)

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-634473 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.077706295s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

TestMultiControlPlane/serial/CopyFile (20.35s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-634473 status --output json --alsologtostderr -v 5: (1.069296639s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 cp testdata/cp-test.txt ha-634473:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 ssh -n ha-634473 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 cp ha-634473:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2832400682/001/cp-test_ha-634473.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 ssh -n ha-634473 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 cp ha-634473:/home/docker/cp-test.txt ha-634473-m02:/home/docker/cp-test_ha-634473_ha-634473-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 ssh -n ha-634473 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 ssh -n ha-634473-m02 "sudo cat /home/docker/cp-test_ha-634473_ha-634473-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 cp ha-634473:/home/docker/cp-test.txt ha-634473-m03:/home/docker/cp-test_ha-634473_ha-634473-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 ssh -n ha-634473 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 ssh -n ha-634473-m03 "sudo cat /home/docker/cp-test_ha-634473_ha-634473-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 cp ha-634473:/home/docker/cp-test.txt ha-634473-m04:/home/docker/cp-test_ha-634473_ha-634473-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 ssh -n ha-634473 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 ssh -n ha-634473-m04 "sudo cat /home/docker/cp-test_ha-634473_ha-634473-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 cp testdata/cp-test.txt ha-634473-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 ssh -n ha-634473-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 cp ha-634473-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2832400682/001/cp-test_ha-634473-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 ssh -n ha-634473-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 cp ha-634473-m02:/home/docker/cp-test.txt ha-634473:/home/docker/cp-test_ha-634473-m02_ha-634473.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 ssh -n ha-634473-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 ssh -n ha-634473 "sudo cat /home/docker/cp-test_ha-634473-m02_ha-634473.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 cp ha-634473-m02:/home/docker/cp-test.txt ha-634473-m03:/home/docker/cp-test_ha-634473-m02_ha-634473-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 ssh -n ha-634473-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 ssh -n ha-634473-m03 "sudo cat /home/docker/cp-test_ha-634473-m02_ha-634473-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 cp ha-634473-m02:/home/docker/cp-test.txt ha-634473-m04:/home/docker/cp-test_ha-634473-m02_ha-634473-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 ssh -n ha-634473-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 ssh -n ha-634473-m04 "sudo cat /home/docker/cp-test_ha-634473-m02_ha-634473-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 cp testdata/cp-test.txt ha-634473-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 ssh -n ha-634473-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 cp ha-634473-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2832400682/001/cp-test_ha-634473-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 ssh -n ha-634473-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 cp ha-634473-m03:/home/docker/cp-test.txt ha-634473:/home/docker/cp-test_ha-634473-m03_ha-634473.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 ssh -n ha-634473-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 ssh -n ha-634473 "sudo cat /home/docker/cp-test_ha-634473-m03_ha-634473.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 cp ha-634473-m03:/home/docker/cp-test.txt ha-634473-m02:/home/docker/cp-test_ha-634473-m03_ha-634473-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 ssh -n ha-634473-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 ssh -n ha-634473-m02 "sudo cat /home/docker/cp-test_ha-634473-m03_ha-634473-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 cp ha-634473-m03:/home/docker/cp-test.txt ha-634473-m04:/home/docker/cp-test_ha-634473-m03_ha-634473-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 ssh -n ha-634473-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 ssh -n ha-634473-m04 "sudo cat /home/docker/cp-test_ha-634473-m03_ha-634473-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 cp testdata/cp-test.txt ha-634473-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 ssh -n ha-634473-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 cp ha-634473-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2832400682/001/cp-test_ha-634473-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 ssh -n ha-634473-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 cp ha-634473-m04:/home/docker/cp-test.txt ha-634473:/home/docker/cp-test_ha-634473-m04_ha-634473.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 ssh -n ha-634473-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 ssh -n ha-634473 "sudo cat /home/docker/cp-test_ha-634473-m04_ha-634473.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 cp ha-634473-m04:/home/docker/cp-test.txt ha-634473-m02:/home/docker/cp-test_ha-634473-m04_ha-634473-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 ssh -n ha-634473-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 ssh -n ha-634473-m02 "sudo cat /home/docker/cp-test_ha-634473-m04_ha-634473-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 cp ha-634473-m04:/home/docker/cp-test.txt ha-634473-m03:/home/docker/cp-test_ha-634473-m04_ha-634473-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 ssh -n ha-634473-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 ssh -n ha-634473-m03 "sudo cat /home/docker/cp-test_ha-634473-m04_ha-634473-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.35s)

TestMultiControlPlane/serial/StopSecondaryNode (12.91s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-634473 node stop m02 --alsologtostderr -v 5: (12.091405688s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-634473 status --alsologtostderr -v 5: exit status 7 (821.873086ms)

-- stdout --
	ha-634473
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-634473-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-634473-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-634473-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1209 05:03:49.787704 1656225 out.go:360] Setting OutFile to fd 1 ...
	I1209 05:03:49.787872 1656225 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:03:49.787885 1656225 out.go:374] Setting ErrFile to fd 2...
	I1209 05:03:49.787891 1656225 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:03:49.788266 1656225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 05:03:49.788532 1656225 out.go:368] Setting JSON to false
	I1209 05:03:49.788575 1656225 mustload.go:66] Loading cluster: ha-634473
	I1209 05:03:49.789398 1656225 config.go:182] Loaded profile config "ha-634473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 05:03:49.789420 1656225 status.go:174] checking status of ha-634473 ...
	I1209 05:03:49.790297 1656225 cli_runner.go:164] Run: docker container inspect ha-634473 --format={{.State.Status}}
	I1209 05:03:49.790853 1656225 notify.go:221] Checking for updates...
	I1209 05:03:49.819877 1656225 status.go:371] ha-634473 host status = "Running" (err=<nil>)
	I1209 05:03:49.819914 1656225 host.go:66] Checking if "ha-634473" exists ...
	I1209 05:03:49.820306 1656225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473
	I1209 05:03:49.843269 1656225 host.go:66] Checking if "ha-634473" exists ...
	I1209 05:03:49.843601 1656225 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:03:49.843665 1656225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473
	I1209 05:03:49.864031 1656225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34260 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473/id_rsa Username:docker}
	I1209 05:03:49.972556 1656225 ssh_runner.go:195] Run: systemctl --version
	I1209 05:03:49.979649 1656225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:03:49.995054 1656225 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:03:50.073141 1656225 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-09 05:03:50.062857974 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:03:50.073743 1656225 kubeconfig.go:125] found "ha-634473" server: "https://192.168.49.254:8443"
	I1209 05:03:50.073782 1656225 api_server.go:166] Checking apiserver status ...
	I1209 05:03:50.073844 1656225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:03:50.087734 1656225 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1261/cgroup
	I1209 05:03:50.097341 1656225 api_server.go:182] apiserver freezer: "2:freezer:/docker/451a940c6775333987f96bda1a8dac55be755a72cdd93ec853e9dcbc59469bf4/crio/crio-f22a05924eab128b6621d22ab5e9561c5dc32a3192e4c7c7de9d896fd57d6ced"
	I1209 05:03:50.097415 1656225 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/451a940c6775333987f96bda1a8dac55be755a72cdd93ec853e9dcbc59469bf4/crio/crio-f22a05924eab128b6621d22ab5e9561c5dc32a3192e4c7c7de9d896fd57d6ced/freezer.state
	I1209 05:03:50.106464 1656225 api_server.go:204] freezer state: "THAWED"
	I1209 05:03:50.106498 1656225 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1209 05:03:50.115225 1656225 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1209 05:03:50.115259 1656225 status.go:463] ha-634473 apiserver status = Running (err=<nil>)
	I1209 05:03:50.115271 1656225 status.go:176] ha-634473 status: &{Name:ha-634473 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 05:03:50.115289 1656225 status.go:174] checking status of ha-634473-m02 ...
	I1209 05:03:50.115657 1656225 cli_runner.go:164] Run: docker container inspect ha-634473-m02 --format={{.State.Status}}
	I1209 05:03:50.134069 1656225 status.go:371] ha-634473-m02 host status = "Stopped" (err=<nil>)
	I1209 05:03:50.134095 1656225 status.go:384] host is not running, skipping remaining checks
	I1209 05:03:50.134104 1656225 status.go:176] ha-634473-m02 status: &{Name:ha-634473-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 05:03:50.134126 1656225 status.go:174] checking status of ha-634473-m03 ...
	I1209 05:03:50.134446 1656225 cli_runner.go:164] Run: docker container inspect ha-634473-m03 --format={{.State.Status}}
	I1209 05:03:50.152111 1656225 status.go:371] ha-634473-m03 host status = "Running" (err=<nil>)
	I1209 05:03:50.152147 1656225 host.go:66] Checking if "ha-634473-m03" exists ...
	I1209 05:03:50.152475 1656225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473-m03
	I1209 05:03:50.181667 1656225 host.go:66] Checking if "ha-634473-m03" exists ...
	I1209 05:03:50.182524 1656225 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:03:50.182611 1656225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m03
	I1209 05:03:50.201726 1656225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34270 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m03/id_rsa Username:docker}
	I1209 05:03:50.312264 1656225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:03:50.327702 1656225 kubeconfig.go:125] found "ha-634473" server: "https://192.168.49.254:8443"
	I1209 05:03:50.327734 1656225 api_server.go:166] Checking apiserver status ...
	I1209 05:03:50.327784 1656225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:03:50.339680 1656225 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1190/cgroup
	I1209 05:03:50.348410 1656225 api_server.go:182] apiserver freezer: "2:freezer:/docker/4096476f12329d36066415868bf1371a304c4e35cf5869220e753759e4326bd5/crio/crio-030ab8745dc3e732a1578e60ecfe89b581303f4356948b70e019e0b0f8293a4f"
	I1209 05:03:50.348507 1656225 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4096476f12329d36066415868bf1371a304c4e35cf5869220e753759e4326bd5/crio/crio-030ab8745dc3e732a1578e60ecfe89b581303f4356948b70e019e0b0f8293a4f/freezer.state
	I1209 05:03:50.364544 1656225 api_server.go:204] freezer state: "THAWED"
	I1209 05:03:50.364585 1656225 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1209 05:03:50.374477 1656225 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1209 05:03:50.374508 1656225 status.go:463] ha-634473-m03 apiserver status = Running (err=<nil>)
	I1209 05:03:50.374518 1656225 status.go:176] ha-634473-m03 status: &{Name:ha-634473-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 05:03:50.374535 1656225 status.go:174] checking status of ha-634473-m04 ...
	I1209 05:03:50.374900 1656225 cli_runner.go:164] Run: docker container inspect ha-634473-m04 --format={{.State.Status}}
	I1209 05:03:50.393930 1656225 status.go:371] ha-634473-m04 host status = "Running" (err=<nil>)
	I1209 05:03:50.393956 1656225 host.go:66] Checking if "ha-634473-m04" exists ...
	I1209 05:03:50.394270 1656225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-634473-m04
	I1209 05:03:50.416458 1656225 host.go:66] Checking if "ha-634473-m04" exists ...
	I1209 05:03:50.416760 1656225 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:03:50.416800 1656225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-634473-m04
	I1209 05:03:50.435391 1656225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34275 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/ha-634473-m04/id_rsa Username:docker}
	I1209 05:03:50.540218 1656225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:03:50.554312 1656225 status.go:176] ha-634473-m04 status: &{Name:ha-634473-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.91s)
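
Note: the stderr trace above shows the three-step apiserver probe behind `minikube status`: pgrep for the kube-apiserver process, a read of the container's freezer cgroup state (anything other than "THAWED" means the node is paused, not healthy), and finally a GET against /healthz on the HA endpoint. A minimal Go sketch of that sequence, with the process pattern and endpoint copied from the log; the cgroup path placeholder and helper name are illustrative, not minikube's actual code:

// apiserver_probe.go - sketch of the pgrep -> freezer.state -> /healthz
// sequence seen in the trace above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"strings"
)

// freezerState reads freezer.state under the apiserver's cgroup; minikube
// expects "THAWED" before it bothers hitting healthz.
func freezerState(cgroup string) (string, error) {
	out, err := exec.Command("sudo", "cat", cgroup+"/freezer.state").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	// Step 1: is a kube-apiserver process running at all? (pattern from the log)
	if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err != nil {
		fmt.Println("apiserver: Stopped")
		return
	}
	// Step 2: freezer check; "<container-id>/..." is a placeholder path.
	if state, err := freezerState("/sys/fs/cgroup/freezer/docker/<container-id>/crio/<pod-id>"); err == nil && state != "THAWED" {
		fmt.Println("apiserver: Paused")
		return
	}
	// Step 3: healthz on the HA load-balancer endpoint from the log.
	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver: Error")
		return
	}
	defer resp.Body.Close()
	if resp.StatusCode == http.StatusOK {
		fmt.Println("apiserver: Running")
	} else {
		fmt.Println("apiserver: Error")
	}
}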

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.03s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.025724739s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.03s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (160.02s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-634473 stop --alsologtostderr -v 5: (37.428641652s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 start --wait true --alsologtostderr -v 5
E1209 05:14:15.061708 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:14:22.262652 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:14:31.980367 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-634473 start --wait true --alsologtostderr -v 5: (2m2.420340445s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (160.02s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12.24s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-634473 node delete m03 --alsologtostderr -v 5: (11.17450953s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.24s)
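
Note: the readiness check above hands kubectl a go-template that prints each node's Ready condition. The lowercase field names (.items, .status, .type) resolve because the template runs over decoded JSON maps rather than Go structs. A self-contained sketch demonstrating the same template with Go's text/template; the sample JSON is a pared-down stand-in for real `kubectl get nodes -o json` output:

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

func main() {
	// Same template string the test passes to kubectl's -o go-template= above.
	const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	// Minimal stand-in for a NodeList; only the fields the template touches.
	const nodesJSON = `{"items":[{"status":{"conditions":[{"type":"Ready","status":"True"}]}}]}`

	var nodes map[string]any
	if err := json.Unmarshal([]byte(nodesJSON), &nodes); err != nil {
		panic(err)
	}
	// Executing over a map is what makes the lowercase keys work - the same
	// reason the template works inside kubectl.
	t := template.Must(template.New("ready").Parse(tmpl))
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
	// Prints " True" - one line per Ready condition per node.
}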

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.85s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.85s)

TestMultiControlPlane/serial/StopCluster (36.18s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 stop --alsologtostderr -v 5
E1209 05:15:45.332615 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-634473 stop --alsologtostderr -v 5: (36.054863905s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-634473 status --alsologtostderr -v 5: exit status 7 (122.586223ms)

-- stdout --
	ha-634473
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-634473-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-634473-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1209 05:15:51.147827 1670256 out.go:360] Setting OutFile to fd 1 ...
	I1209 05:15:51.148061 1670256 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:15:51.148089 1670256 out.go:374] Setting ErrFile to fd 2...
	I1209 05:15:51.148107 1670256 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:15:51.148424 1670256 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 05:15:51.148655 1670256 out.go:368] Setting JSON to false
	I1209 05:15:51.148724 1670256 mustload.go:66] Loading cluster: ha-634473
	I1209 05:15:51.148831 1670256 notify.go:221] Checking for updates...
	I1209 05:15:51.149233 1670256 config.go:182] Loaded profile config "ha-634473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 05:15:51.149270 1670256 status.go:174] checking status of ha-634473 ...
	I1209 05:15:51.149873 1670256 cli_runner.go:164] Run: docker container inspect ha-634473 --format={{.State.Status}}
	I1209 05:15:51.169736 1670256 status.go:371] ha-634473 host status = "Stopped" (err=<nil>)
	I1209 05:15:51.169757 1670256 status.go:384] host is not running, skipping remaining checks
	I1209 05:15:51.169764 1670256 status.go:176] ha-634473 status: &{Name:ha-634473 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 05:15:51.169795 1670256 status.go:174] checking status of ha-634473-m02 ...
	I1209 05:15:51.170095 1670256 cli_runner.go:164] Run: docker container inspect ha-634473-m02 --format={{.State.Status}}
	I1209 05:15:51.191099 1670256 status.go:371] ha-634473-m02 host status = "Stopped" (err=<nil>)
	I1209 05:15:51.191118 1670256 status.go:384] host is not running, skipping remaining checks
	I1209 05:15:51.191139 1670256 status.go:176] ha-634473-m02 status: &{Name:ha-634473-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 05:15:51.191160 1670256 status.go:174] checking status of ha-634473-m04 ...
	I1209 05:15:51.191457 1670256 cli_runner.go:164] Run: docker container inspect ha-634473-m04 --format={{.State.Status}}
	I1209 05:15:51.214147 1670256 status.go:371] ha-634473-m04 host status = "Stopped" (err=<nil>)
	I1209 05:15:51.214167 1670256 status.go:384] host is not running, skipping remaining checks
	I1209 05:15:51.214173 1670256 status.go:176] ha-634473-m04 status: &{Name:ha-634473-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.18s)
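
Note: in both stopped-cluster runs above, `minikube status` prints the per-node table and exits with status 7 rather than 0, so scripts wrapping it must treat a non-zero exit as a state report, not a command failure. A hedged Go sketch of that handling; no exit-code meanings are assumed beyond what these runs show:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// In the runs above, a fully stopped HA cluster returned exit status 7
	// while still emitting the status table on stdout.
	cmd := exec.Command("minikube", "-p", "ha-634473", "status")
	out, err := cmd.Output()
	fmt.Print(string(out)) // the per-node table is valid either way

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Non-zero exit: report the code instead of failing hard.
		fmt.Println("status exit code:", exitErr.ExitCode())
	} else if err != nil {
		panic(err) // binary missing, permissions, etc.
	}
}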

TestMultiControlPlane/serial/RestartCluster (89.35s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1209 05:16:21.786009 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-790468/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-634473 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m28.307311755s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (89.35s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.81s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.81s)

TestMultiControlPlane/serial/AddSecondaryNode (94.2s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-634473 node add --control-plane --alsologtostderr -v 5: (1m33.057829282s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-634473 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-634473 status --alsologtostderr -v 5: (1.138832107s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (94.20s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.13s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.126289818s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.13s)

TestJSONOutput/start/Command (52.38s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-114438 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1209 05:19:22.262960 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:19:31.980184 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-114438 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (52.378925476s)
--- PASS: TestJSONOutput/start/Command (52.38s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.85s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-114438 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-114438 --output=json --user=testUser: (5.853600658s)
--- PASS: TestJSONOutput/stop/Command (5.85s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.26s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-457359 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-457359 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (91.122271ms)

-- stdout --
	{"specversion":"1.0","id":"49114782-8362-4003-a63d-8ae92b727e7e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-457359] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c607d74a-7e49-4208-b9ce-e57289c6753e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22081"}}
	{"specversion":"1.0","id":"d1a37dbe-35b8-4dbb-b41a-f7d91ea3211a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"93dd7e42-68ff-454d-9569-a8f508201d48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22081-1577059/kubeconfig"}}
	{"specversion":"1.0","id":"fb278c2c-5c43-4455-8a81-1354166672b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1577059/.minikube"}}
	{"specversion":"1.0","id":"e188a34e-522f-4479-953c-000d185f602f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"983e16be-1517-40a5-b17f-4e8ff62cce8f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a8f36b66-78d8-4aee-8c29-60e47222358b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-457359" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-457359
--- PASS: TestErrorJSONOutput (0.26s)
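
Note: with --output=json, minikube emits one CloudEvents-style JSON object per line, as the stdout block above shows (specversion, a type such as io.k8s.sigs.minikube.step or io.k8s.sigs.minikube.error, and a string-valued data payload). A small Go sketch that consumes such a stream; the struct mirrors only the fields visible in this log, not a published schema:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// event mirrors the fields visible in the stdout block above; data values
// are all strings in the log ("currentstep":"0", "exitcode":"56", ...).
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// e.g. `minikube start --output=json ... | thisprog`
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON noise between events
		}
		switch {
		case strings.HasSuffix(ev.Type, ".error"):
			fmt.Printf("ERROR %s (exitcode %s)\n", ev.Data["message"], ev.Data["exitcode"])
		case strings.HasSuffix(ev.Type, ".step"):
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		default:
			fmt.Println(ev.Data["message"])
		}
	}
}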

TestKicCustomNetwork/create_custom_network (40.26s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-760785 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-760785 --network=: (37.96385162s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-760785" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-760785
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-760785: (2.26743022s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.26s)

TestKicCustomNetwork/use_default_bridge_network (36.98s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-039086 --network=bridge
E1209 05:21:04.856420 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-790468/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:21:21.786749 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-790468/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-039086 --network=bridge: (34.789684003s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-039086" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-039086
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-039086: (2.164085621s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.98s)

TestKicExistingNetwork (35.12s)
=== RUN   TestKicExistingNetwork
I1209 05:21:32.343887 1580521 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1209 05:21:32.360285 1580521 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1209 05:21:32.360380 1580521 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1209 05:21:32.360403 1580521 cli_runner.go:164] Run: docker network inspect existing-network
W1209 05:21:32.376750 1580521 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1209 05:21:32.376792 1580521 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1209 05:21:32.376807 1580521 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1209 05:21:32.376926 1580521 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1209 05:21:32.394171 1580521 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7a642f98ac35 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:32:e5:1c:28:3f:19} reservation:<nil>}
I1209 05:21:32.394506 1580521 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001716160}
I1209 05:21:32.394528 1580521 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1209 05:21:32.394595 1580521 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1209 05:21:32.458109 1580521 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-135598 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-135598 --network=existing-network: (32.866485808s)
helpers_test.go:175: Cleaning up "existing-network-135598" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-135598
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-135598: (2.10894514s)
I1209 05:22:07.451009 1580521 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (35.12s)
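
Note: the inline log above captures minikube's free-subnet scan: 192.168.49.0/24 is taken by an existing bridge, so network_create falls through to 192.168.58.0/24 (elsewhere in this report a third cluster lands on 192.168.67.0/24). A simplified Go sketch of such a scan; the step of 9 between candidates is inferred from those values, not confirmed against minikube's source:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// takenSubnets collects subnets already used by docker networks, using the
// same inspect-template approach the log shows.
func takenSubnets() map[string]bool {
	taken := map[string]bool{}
	out, _ := exec.Command("docker", "network", "ls", "-q").Output()
	for _, id := range strings.Fields(string(out)) {
		sub, _ := exec.Command("docker", "network", "inspect", id,
			"--format", "{{range .IPAM.Config}}{{.Subnet}}{{end}}").Output()
		if s := strings.TrimSpace(string(sub)); s != "" {
			taken[s] = true
		}
	}
	return taken
}

func main() {
	taken := takenSubnets()
	// Candidates mirror the values in the log: 49 taken, 58 chosen, 67 used
	// by a later cluster; the increment is an assumption.
	for third := 49; third <= 255; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[subnet] {
			fmt.Println("using free private subnet", subnet)
			return
		}
	}
	fmt.Println("no free candidate found")
}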

TestKicCustomSubnet (36.92s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-780574 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-780574 --subnet=192.168.60.0/24: (34.580998743s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-780574 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-780574" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-780574
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-780574: (2.309821154s)
--- PASS: TestKicCustomSubnet (36.92s)

TestKicStaticIP (35.73s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-088572 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-088572 --static-ip=192.168.200.200: (33.38457673s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-088572 ip
helpers_test.go:175: Cleaning up "static-ip-088572" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-088572
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-088572: (2.189114645s)
--- PASS: TestKicStaticIP (35.73s)

TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (77.24s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-207634 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-207634 --driver=docker  --container-runtime=crio: (36.329336549s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-210252 --driver=docker  --container-runtime=crio
E1209 05:24:22.262641 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-210252 --driver=docker  --container-runtime=crio: (35.155263327s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-207634
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
E1209 05:24:31.980219 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-210252
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-210252" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-210252
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-210252: (2.174942184s)
helpers_test.go:175: Cleaning up "first-207634" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-207634
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-207634: (2.063390811s)
--- PASS: TestMinikubeProfile (77.24s)
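
Note: the test switches the active profile with `minikube profile <name>` and re-reads `profile list -ojson` after each switch. A sketch that decodes that JSON; the top-level valid/invalid arrays and the Name field are assumptions based on this minikube version's observed output, so verify against your build:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profiles models the two top-level arrays `minikube profile list -ojson`
// is assumed to emit; only the fields printed below are declared.
type profiles struct {
	Valid   []struct{ Name string } `json:"valid"`
	Invalid []struct{ Name string } `json:"invalid"`
}

func main() {
	out, err := exec.Command("minikube", "profile", "list", "-ojson").Output()
	if err != nil {
		panic(err)
	}
	var p profiles
	if err := json.Unmarshal(out, &p); err != nil {
		panic(err)
	}
	for _, v := range p.Valid {
		fmt.Println("valid profile:", v.Name)
	}
	for _, iv := range p.Invalid {
		fmt.Println("invalid profile:", iv.Name)
	}
}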

TestMountStart/serial/StartWithMountFirst (8.72s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-890282 --memory=3072 --mount-string /tmp/TestMountStartserial493503801/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-890282 --memory=3072 --mount-string /tmp/TestMountStartserial493503801/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.719882996s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.72s)

TestMountStart/serial/VerifyMountFirst (0.28s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-890282 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (9.08s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-892227 --memory=3072 --mount-string /tmp/TestMountStartserial493503801/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-892227 --memory=3072 --mount-string /tmp/TestMountStartserial493503801/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.079150157s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.08s)

TestMountStart/serial/VerifyMountSecond (0.28s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-892227 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.73s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-890282 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-890282 --alsologtostderr -v=5: (1.73079405s)
--- PASS: TestMountStart/serial/DeleteFirst (1.73s)

TestMountStart/serial/VerifyMountPostDelete (0.29s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-892227 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

TestMountStart/serial/Stop (1.3s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-892227
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-892227: (1.295990706s)
--- PASS: TestMountStart/serial/Stop (1.30s)

TestMountStart/serial/RestartStopped (7.86s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-892227
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-892227: (6.85731543s)
--- PASS: TestMountStart/serial/RestartStopped (7.86s)

TestMountStart/serial/VerifyMountPostStop (0.3s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-892227 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.30s)

TestMultiNode/serial/FreshStart2Nodes (139.43s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-765524 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1209 05:26:21.780626 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-790468/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-765524 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m18.89388067s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (139.43s)

TestMultiNode/serial/DeployApp2Nodes (5.98s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-765524 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-765524 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-765524 -- rollout status deployment/busybox: (4.213197082s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-765524 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-765524 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-765524 -- exec busybox-7b57f96db7-fbhvh -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-765524 -- exec busybox-7b57f96db7-h2rzn -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-765524 -- exec busybox-7b57f96db7-fbhvh -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-765524 -- exec busybox-7b57f96db7-h2rzn -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-765524 -- exec busybox-7b57f96db7-fbhvh -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-765524 -- exec busybox-7b57f96db7-h2rzn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.98s)

TestMultiNode/serial/PingHostFrom2Pods (0.93s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-765524 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-765524 -- exec busybox-7b57f96db7-fbhvh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-765524 -- exec busybox-7b57f96db7-fbhvh -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-765524 -- exec busybox-7b57f96db7-h2rzn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-765524 -- exec busybox-7b57f96db7-h2rzn -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.93s)

TestMultiNode/serial/AddNode (59.32s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-765524 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-765524 -v=5 --alsologtostderr: (58.592951735s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (59.32s)

TestMultiNode/serial/MultiNodeLabels (0.09s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-765524 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.72s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.72s)

TestMultiNode/serial/CopyFile (10.81s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 cp testdata/cp-test.txt multinode-765524:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 ssh -n multinode-765524 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 cp multinode-765524:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1662026912/001/cp-test_multinode-765524.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 ssh -n multinode-765524 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 cp multinode-765524:/home/docker/cp-test.txt multinode-765524-m02:/home/docker/cp-test_multinode-765524_multinode-765524-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 ssh -n multinode-765524 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 ssh -n multinode-765524-m02 "sudo cat /home/docker/cp-test_multinode-765524_multinode-765524-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 cp multinode-765524:/home/docker/cp-test.txt multinode-765524-m03:/home/docker/cp-test_multinode-765524_multinode-765524-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 ssh -n multinode-765524 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 ssh -n multinode-765524-m03 "sudo cat /home/docker/cp-test_multinode-765524_multinode-765524-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 cp testdata/cp-test.txt multinode-765524-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 ssh -n multinode-765524-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 cp multinode-765524-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1662026912/001/cp-test_multinode-765524-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 ssh -n multinode-765524-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 cp multinode-765524-m02:/home/docker/cp-test.txt multinode-765524:/home/docker/cp-test_multinode-765524-m02_multinode-765524.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 ssh -n multinode-765524-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 ssh -n multinode-765524 "sudo cat /home/docker/cp-test_multinode-765524-m02_multinode-765524.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 cp multinode-765524-m02:/home/docker/cp-test.txt multinode-765524-m03:/home/docker/cp-test_multinode-765524-m02_multinode-765524-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 ssh -n multinode-765524-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 ssh -n multinode-765524-m03 "sudo cat /home/docker/cp-test_multinode-765524-m02_multinode-765524-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 cp testdata/cp-test.txt multinode-765524-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 ssh -n multinode-765524-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 cp multinode-765524-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1662026912/001/cp-test_multinode-765524-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 ssh -n multinode-765524-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 cp multinode-765524-m03:/home/docker/cp-test.txt multinode-765524:/home/docker/cp-test_multinode-765524-m03_multinode-765524.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 ssh -n multinode-765524-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 ssh -n multinode-765524 "sudo cat /home/docker/cp-test_multinode-765524-m03_multinode-765524.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 cp multinode-765524-m03:/home/docker/cp-test.txt multinode-765524-m02:/home/docker/cp-test_multinode-765524-m03_multinode-765524-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 ssh -n multinode-765524-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 ssh -n multinode-765524-m02 "sudo cat /home/docker/cp-test_multinode-765524-m03_multinode-765524-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.81s)

TestMultiNode/serial/StopNode (2.46s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-765524 node stop m03: (1.348506268s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-765524 status: exit status 7 (573.714328ms)
-- stdout --
	multinode-765524
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-765524-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-765524-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-765524 status --alsologtostderr: exit status 7 (541.074712ms)
-- stdout --
	multinode-765524
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-765524-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-765524-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1209 05:28:48.413190 1721228 out.go:360] Setting OutFile to fd 1 ...
	I1209 05:28:48.413361 1721228 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:28:48.413393 1721228 out.go:374] Setting ErrFile to fd 2...
	I1209 05:28:48.413414 1721228 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:28:48.413699 1721228 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 05:28:48.413914 1721228 out.go:368] Setting JSON to false
	I1209 05:28:48.413988 1721228 mustload.go:66] Loading cluster: multinode-765524
	I1209 05:28:48.414064 1721228 notify.go:221] Checking for updates...
	I1209 05:28:48.414482 1721228 config.go:182] Loaded profile config "multinode-765524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 05:28:48.414522 1721228 status.go:174] checking status of multinode-765524 ...
	I1209 05:28:48.415101 1721228 cli_runner.go:164] Run: docker container inspect multinode-765524 --format={{.State.Status}}
	I1209 05:28:48.435358 1721228 status.go:371] multinode-765524 host status = "Running" (err=<nil>)
	I1209 05:28:48.435385 1721228 host.go:66] Checking if "multinode-765524" exists ...
	I1209 05:28:48.435709 1721228 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-765524
	I1209 05:28:48.459316 1721228 host.go:66] Checking if "multinode-765524" exists ...
	I1209 05:28:48.459621 1721228 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:28:48.459674 1721228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-765524
	I1209 05:28:48.476255 1721228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34381 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/multinode-765524/id_rsa Username:docker}
	I1209 05:28:48.580569 1721228 ssh_runner.go:195] Run: systemctl --version
	I1209 05:28:48.587381 1721228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:28:48.601934 1721228 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 05:28:48.659212 1721228 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-09 05:28:48.649392972 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1209 05:28:48.659768 1721228 kubeconfig.go:125] found "multinode-765524" server: "https://192.168.67.2:8443"
	I1209 05:28:48.659806 1721228 api_server.go:166] Checking apiserver status ...
	I1209 05:28:48.659852 1721228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 05:28:48.672493 1721228 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1267/cgroup
	I1209 05:28:48.681135 1721228 api_server.go:182] apiserver freezer: "2:freezer:/docker/109bd1a908548a63391aa3b92ac2a71d75876bafcc62e6b6e4e895ad336e84b3/crio/crio-73679462dc4c817eb8b33c096efd14503e3925eb5b4eaa27b7c1c5e5df139588"
	I1209 05:28:48.681250 1721228 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/109bd1a908548a63391aa3b92ac2a71d75876bafcc62e6b6e4e895ad336e84b3/crio/crio-73679462dc4c817eb8b33c096efd14503e3925eb5b4eaa27b7c1c5e5df139588/freezer.state
	I1209 05:28:48.689544 1721228 api_server.go:204] freezer state: "THAWED"
	I1209 05:28:48.689578 1721228 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1209 05:28:48.704127 1721228 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1209 05:28:48.704159 1721228 status.go:463] multinode-765524 apiserver status = Running (err=<nil>)
	I1209 05:28:48.704171 1721228 status.go:176] multinode-765524 status: &{Name:multinode-765524 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 05:28:48.704218 1721228 status.go:174] checking status of multinode-765524-m02 ...
	I1209 05:28:48.704578 1721228 cli_runner.go:164] Run: docker container inspect multinode-765524-m02 --format={{.State.Status}}
	I1209 05:28:48.724136 1721228 status.go:371] multinode-765524-m02 host status = "Running" (err=<nil>)
	I1209 05:28:48.724161 1721228 host.go:66] Checking if "multinode-765524-m02" exists ...
	I1209 05:28:48.724480 1721228 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-765524-m02
	I1209 05:28:48.742351 1721228 host.go:66] Checking if "multinode-765524-m02" exists ...
	I1209 05:28:48.742724 1721228 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 05:28:48.742805 1721228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-765524-m02
	I1209 05:28:48.761819 1721228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34386 SSHKeyPath:/home/jenkins/minikube-integration/22081-1577059/.minikube/machines/multinode-765524-m02/id_rsa Username:docker}
	I1209 05:28:48.868107 1721228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 05:28:48.881046 1721228 status.go:176] multinode-765524-m02 status: &{Name:multinode-765524-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1209 05:28:48.881083 1721228 status.go:174] checking status of multinode-765524-m03 ...
	I1209 05:28:48.881385 1721228 cli_runner.go:164] Run: docker container inspect multinode-765524-m03 --format={{.State.Status}}
	I1209 05:28:48.899747 1721228 status.go:371] multinode-765524-m03 host status = "Stopped" (err=<nil>)
	I1209 05:28:48.899771 1721228 status.go:384] host is not running, skipping remaining checks
	I1209 05:28:48.899779 1721228 status.go:176] multinode-765524-m03 status: &{Name:multinode-765524-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.46s)

TestMultiNode/serial/StartAfterStop (9.02s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-765524 node start m03 -v=5 --alsologtostderr: (8.212005605s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.02s)

TestMultiNode/serial/RestartKeepsNodes (76.05s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-765524
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-765524
E1209 05:29:22.262279 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-765524: (25.092056362s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-765524 --wait=true -v=5 --alsologtostderr
E1209 05:29:31.979864 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-765524 --wait=true -v=5 --alsologtostderr: (50.838889886s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-765524
--- PASS: TestMultiNode/serial/RestartKeepsNodes (76.05s)

TestMultiNode/serial/DeleteNode (5.75s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-765524 node delete m03: (5.03231014s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.75s)

TestMultiNode/serial/StopMultiNode (24.02s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-765524 stop: (23.831247732s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-765524 status: exit status 7 (97.85311ms)
-- stdout --
	multinode-765524
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-765524-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-765524 status --alsologtostderr: exit status 7 (95.337215ms)
-- stdout --
	multinode-765524
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-765524-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1209 05:30:43.706451 1729092 out.go:360] Setting OutFile to fd 1 ...
	I1209 05:30:43.706676 1729092 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:30:43.706706 1729092 out.go:374] Setting ErrFile to fd 2...
	I1209 05:30:43.706726 1729092 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:30:43.707001 1729092 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 05:30:43.707229 1729092 out.go:368] Setting JSON to false
	I1209 05:30:43.707293 1729092 mustload.go:66] Loading cluster: multinode-765524
	I1209 05:30:43.707388 1729092 notify.go:221] Checking for updates...
	I1209 05:30:43.707773 1729092 config.go:182] Loaded profile config "multinode-765524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 05:30:43.707808 1729092 status.go:174] checking status of multinode-765524 ...
	I1209 05:30:43.708743 1729092 cli_runner.go:164] Run: docker container inspect multinode-765524 --format={{.State.Status}}
	I1209 05:30:43.731115 1729092 status.go:371] multinode-765524 host status = "Stopped" (err=<nil>)
	I1209 05:30:43.731137 1729092 status.go:384] host is not running, skipping remaining checks
	I1209 05:30:43.731145 1729092 status.go:176] multinode-765524 status: &{Name:multinode-765524 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1209 05:30:43.731178 1729092 status.go:174] checking status of multinode-765524-m02 ...
	I1209 05:30:43.731525 1729092 cli_runner.go:164] Run: docker container inspect multinode-765524-m02 --format={{.State.Status}}
	I1209 05:30:43.752349 1729092 status.go:371] multinode-765524-m02 host status = "Stopped" (err=<nil>)
	I1209 05:30:43.752373 1729092 status.go:384] host is not running, skipping remaining checks
	I1209 05:30:43.752381 1729092 status.go:176] multinode-765524-m02 status: &{Name:multinode-765524-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.02s)

TestMultiNode/serial/RestartMultiNode (57.27s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-765524 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1209 05:30:55.063874 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:31:21.780647 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-790468/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-765524 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (56.556889275s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-765524 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (57.27s)

TestMultiNode/serial/ValidateNameConflict (36.27s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-765524
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-765524-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-765524-m02 --driver=docker  --container-runtime=crio: exit status 14 (94.915292ms)
-- stdout --
	* [multinode-765524-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-1577059/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1577059/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-765524-m02' is duplicated with machine name 'multinode-765524-m02' in profile 'multinode-765524'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-765524-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-765524-m03 --driver=docker  --container-runtime=crio: (33.659885284s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-765524
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-765524: exit status 80 (356.755351ms)
-- stdout --
	* Adding node m03 to cluster multinode-765524 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-765524-m03 already exists in multinode-765524-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-765524-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-765524-m03: (2.113929717s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.27s)

TestPreload (119.51s)
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-068301 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
E1209 05:32:25.334727 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-068301 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (1m2.591616498s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-068301 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 -p test-preload-068301 image pull gcr.io/k8s-minikube/busybox: (2.241412956s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-068301
preload_test.go:55: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-068301: (5.960529306s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-068301 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-068301 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (46.006681724s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-068301 image list
helpers_test.go:175: Cleaning up "test-preload-068301" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-068301
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-068301: (2.467073684s)
--- PASS: TestPreload (119.51s)

TestScheduledStopUnix (109.88s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-848288 --memory=3072 --driver=docker  --container-runtime=crio
E1209 05:34:22.262700 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:34:31.980228 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-848288 --memory=3072 --driver=docker  --container-runtime=crio: (34.050502876s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-848288 --schedule 5m -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1209 05:34:55.327668 1743219 out.go:360] Setting OutFile to fd 1 ...
	I1209 05:34:55.327790 1743219 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:34:55.327801 1743219 out.go:374] Setting ErrFile to fd 2...
	I1209 05:34:55.327806 1743219 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:34:55.328052 1743219 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 05:34:55.328281 1743219 out.go:368] Setting JSON to false
	I1209 05:34:55.328396 1743219 mustload.go:66] Loading cluster: scheduled-stop-848288
	I1209 05:34:55.328755 1743219 config.go:182] Loaded profile config "scheduled-stop-848288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 05:34:55.328826 1743219 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/scheduled-stop-848288/config.json ...
	I1209 05:34:55.329008 1743219 mustload.go:66] Loading cluster: scheduled-stop-848288
	I1209 05:34:55.329131 1743219 config.go:182] Loaded profile config "scheduled-stop-848288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-848288 -n scheduled-stop-848288
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-848288 --schedule 15s -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1209 05:34:55.801898 1743311 out.go:360] Setting OutFile to fd 1 ...
	I1209 05:34:55.802049 1743311 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:34:55.802063 1743311 out.go:374] Setting ErrFile to fd 2...
	I1209 05:34:55.802069 1743311 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:34:55.802309 1743311 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 05:34:55.802555 1743311 out.go:368] Setting JSON to false
	I1209 05:34:55.802791 1743311 daemonize_unix.go:73] killing process 1743233 as it is an old scheduled stop
	I1209 05:34:55.802947 1743311 mustload.go:66] Loading cluster: scheduled-stop-848288
	I1209 05:34:55.803350 1743311 config.go:182] Loaded profile config "scheduled-stop-848288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 05:34:55.803423 1743311 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/scheduled-stop-848288/config.json ...
	I1209 05:34:55.803587 1743311 mustload.go:66] Loading cluster: scheduled-stop-848288
	I1209 05:34:55.803731 1743311 config.go:182] Loaded profile config "scheduled-stop-848288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1209 05:34:55.814851 1580521 retry.go:31] will retry after 123.884µs: open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/scheduled-stop-848288/pid: no such file or directory
I1209 05:34:55.816006 1580521 retry.go:31] will retry after 210.193µs: open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/scheduled-stop-848288/pid: no such file or directory
I1209 05:34:55.817164 1580521 retry.go:31] will retry after 126.816µs: open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/scheduled-stop-848288/pid: no such file or directory
I1209 05:34:55.818300 1580521 retry.go:31] will retry after 329.382µs: open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/scheduled-stop-848288/pid: no such file or directory
I1209 05:34:55.819444 1580521 retry.go:31] will retry after 713.36µs: open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/scheduled-stop-848288/pid: no such file or directory
I1209 05:34:55.820713 1580521 retry.go:31] will retry after 402.575µs: open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/scheduled-stop-848288/pid: no such file or directory
I1209 05:34:55.821873 1580521 retry.go:31] will retry after 1.287696ms: open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/scheduled-stop-848288/pid: no such file or directory
I1209 05:34:55.824140 1580521 retry.go:31] will retry after 1.579673ms: open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/scheduled-stop-848288/pid: no such file or directory
I1209 05:34:55.826387 1580521 retry.go:31] will retry after 3.817907ms: open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/scheduled-stop-848288/pid: no such file or directory
I1209 05:34:55.830703 1580521 retry.go:31] will retry after 2.748491ms: open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/scheduled-stop-848288/pid: no such file or directory
I1209 05:34:55.834002 1580521 retry.go:31] will retry after 5.509326ms: open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/scheduled-stop-848288/pid: no such file or directory
I1209 05:34:55.840232 1580521 retry.go:31] will retry after 4.789191ms: open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/scheduled-stop-848288/pid: no such file or directory
I1209 05:34:55.845457 1580521 retry.go:31] will retry after 8.973188ms: open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/scheduled-stop-848288/pid: no such file or directory
I1209 05:34:55.854675 1580521 retry.go:31] will retry after 21.715834ms: open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/scheduled-stop-848288/pid: no such file or directory
I1209 05:34:55.876908 1580521 retry.go:31] will retry after 38.701173ms: open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/scheduled-stop-848288/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-848288 --cancel-scheduled
minikube stop output:
-- stdout --
	* All existing scheduled stops cancelled
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-848288 -n scheduled-stop-848288
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-848288
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-848288 --schedule 15s -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1209 05:35:21.719734 1743676 out.go:360] Setting OutFile to fd 1 ...
	I1209 05:35:21.719974 1743676 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:35:21.720013 1743676 out.go:374] Setting ErrFile to fd 2...
	I1209 05:35:21.720035 1743676 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1209 05:35:21.720347 1743676 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1577059/.minikube/bin
	I1209 05:35:21.720668 1743676 out.go:368] Setting JSON to false
	I1209 05:35:21.720843 1743676 mustload.go:66] Loading cluster: scheduled-stop-848288
	I1209 05:35:21.721293 1743676 config.go:182] Loaded profile config "scheduled-stop-848288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1209 05:35:21.721428 1743676 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/scheduled-stop-848288/config.json ...
	I1209 05:35:21.721728 1743676 mustload.go:66] Loading cluster: scheduled-stop-848288
	I1209 05:35:21.721918 1743676 config.go:182] Loaded profile config "scheduled-stop-848288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-848288
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-848288: exit status 7 (68.713534ms)
-- stdout --
	scheduled-stop-848288
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-848288 -n scheduled-stop-848288
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-848288 -n scheduled-stop-848288: exit status 7 (71.392537ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-848288" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-848288
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-848288: (4.224207403s)
--- PASS: TestScheduledStopUnix (109.88s)

TestInsufficientStorage (12.91s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-743953 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-743953 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.263092179s)
-- stdout --
	{"specversion":"1.0","id":"3f45ecae-9a92-49ee-9d92-2b56f1690d2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-743953] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d9dd46cc-ae52-4686-bd72-7b3a9a15964a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22081"}}
	{"specversion":"1.0","id":"f0f33544-773e-496e-a87f-a80dbd5645b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7ae67ffb-ffb0-43ca-ad4e-2576f3fffc81","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22081-1577059/kubeconfig"}}
	{"specversion":"1.0","id":"60d2e501-937c-4887-944b-3ed5a2718a03","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1577059/.minikube"}}
	{"specversion":"1.0","id":"f2374f85-4c14-4f40-af6c-7f834652b91f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"27411ea1-7cf7-488c-ae31-18b5c20b46a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ebbd01e0-d3f1-4ff6-aebd-60df1f83512f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"70b45c7e-339e-4b30-aada-a91b019cbf72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"5a386b6a-97c8-4e92-add4-93c1817b3829","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e547bc2d-ba75-4dba-981c-e7581c1bc0de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"735401e8-b6e7-4427-b261-02d7f7a04a92","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-743953\" primary control-plane node in \"insufficient-storage-743953\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"bf8e3075-ca8f-42af-87d8-84f22421701f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765184860-22066 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"e9d1caf0-9a2c-41d1-8825-91ec394beb61","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"6c90c47b-23f3-4c19-8f62-70210c2237b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-743953 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-743953 --output=json --layout=cluster: exit status 7 (300.370487ms)
-- stdout --
	{"Name":"insufficient-storage-743953","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-743953","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1209 05:36:21.644045 1745397 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-743953" does not appear in /home/jenkins/minikube-integration/22081-1577059/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-743953 --output=json --layout=cluster
E1209 05:36:21.780958 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-790468/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-743953 --output=json --layout=cluster: exit status 7 (302.915352ms)
-- stdout --
	{"Name":"insufficient-storage-743953","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-743953","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1209 05:36:21.947470 1745464 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-743953" does not appear in /home/jenkins/minikube-integration/22081-1577059/kubeconfig
	E1209 05:36:21.957426 1745464 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/insufficient-storage-743953/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-743953" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-743953
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-743953: (2.044743224s)
--- PASS: TestInsufficientStorage (12.91s)

TestRunningBinaryUpgrade (302.45s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.2286952708 start -p running-upgrade-831739 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1209 05:49:05.338730 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.2286952708 start -p running-upgrade-831739 --memory=3072 --vm-driver=docker  --container-runtime=crio: (31.17376367s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-831739 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1209 05:49:22.262326 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:49:31.980173 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:51:21.781064 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-790468/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-831739 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m26.555451828s)
helpers_test.go:175: Cleaning up "running-upgrade-831739" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-831739
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-831739: (3.623549523s)
--- PASS: TestRunningBinaryUpgrade (302.45s)

TestMissingContainerUpgrade (110.4s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.3454181436 start -p missing-upgrade-839923 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.3454181436 start -p missing-upgrade-839923 --memory=3072 --driver=docker  --container-runtime=crio: (1m5.254388903s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-839923
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-839923
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-839923 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-839923 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.579691449s)
helpers_test.go:175: Cleaning up "missing-upgrade-839923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-839923
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-839923: (2.07057805s)
--- PASS: TestMissingContainerUpgrade (110.40s)
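
Note: this test exercises recovery when the node container disappears between minikube releases: the old binary creates the cluster, the container is removed out from under it, and the new binary must recreate the node on start. Condensed from the commands above into a reproduction sketch (the /tmp binary is the randomized old-release download from this run):

    # Create a cluster with the previous release, then delete its node container.
    /tmp/minikube-v1.35.0.3454181436 start -p missing-upgrade-839923 --memory=3072 --driver=docker --container-runtime=crio
    docker stop missing-upgrade-839923
    docker rm missing-upgrade-839923
    # The binary under test is expected to rebuild the missing container.
    out/minikube-linux-arm64 start -p missing-upgrade-839923 --memory=3072 --driver=docker --container-runtime=crio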

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-832858 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-832858 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (97.941494ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-832858] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22081
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22081-1577059/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1577059/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
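
Note: the exit-status-14 (MK_USAGE) result above is the behavior under test, not a failure: minikube refuses --kubernetes-version combined with --no-kubernetes. A minimal sketch of the rejected invocation and the fix the error message itself suggests:

    # Rejected: pinning a Kubernetes version contradicts --no-kubernetes.
    out/minikube-linux-arm64 start -p NoKubernetes-832858 --no-kubernetes --kubernetes-version=v1.28.0
    # If the version is pinned in the global config, clear it first:
    out/minikube-linux-arm64 config unset kubernetes-version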

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (42.75s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-832858 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-832858 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (42.205479828s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-832858 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.75s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-832858 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-832858 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (15.045819898s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-832858 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-832858 status -o json: exit status 2 (486.031085ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-832858","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-832858
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-832858: (2.468036141s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.00s)
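
Note: the exit status 2 from "status" is expected here: the JSON shows the host container running while the Kubernetes components are stopped, which is exactly the state --no-kubernetes should leave behind. A sketch that inspects the fields directly rather than relying on the exit code, assuming jq is installed:

    # Prints "Stopped" for this profile while Kubernetes is disabled.
    out/minikube-linux-arm64 -p NoKubernetes-832858 status -o json | jq -r '.Kubelet'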

                                                
                                    
TestNoKubernetes/serial/Start (8.81s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-832858 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-832858 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (8.805338607s)
--- PASS: TestNoKubernetes/serial/Start (8.81s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)
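
Note: v0.0.0 appears to be the cache directory minikube would use for a no-Kubernetes profile, so the test passes when nothing was downloaded into it. A hand check along the same lines (hedged; the exact assertion in no_kubernetes_test.go may differ):

    # Expect this directory to be absent or empty of kubelet/kubeadm/kubectl binaries.
    ls -la /home/jenkins/minikube-integration/22081-1577059/.minikube/cache/linux/arm64/v0.0.0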

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-832858 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-832858 "sudo systemctl is-active --quiet service kubelet": exit status 1 (283.902032ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
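
Note: exit status 3 is the success condition here: systemctl is-active returns 0 only when the unit is active, and 3 is the conventional code for an inactive service, so the non-zero exit is what proves kubelet is not running. The same check by hand:

    # The || branch fires when kubelet is inactive, as this test expects.
    out/minikube-linux-arm64 ssh -p NoKubernetes-832858 "sudo systemctl is-active --quiet service kubelet" || echo "kubelet is not running"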

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.71s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-832858
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-832858: (1.286728774s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (93.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-832858 --driver=docker  --container-runtime=crio
E1209 05:37:44.858746 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-790468/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-832858 --driver=docker  --container-runtime=crio: (1m33.37987252s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (93.38s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-832858 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-832858 "sudo systemctl is-active --quiet service kubelet": exit status 1 (287.066839ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.9s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.90s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (303.94s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.1925602574 start -p stopped-upgrade-056039 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.1925602574 start -p stopped-upgrade-056039 --memory=3072 --vm-driver=docker  --container-runtime=crio: (36.541910332s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.1925602574 -p stopped-upgrade-056039 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.1925602574 -p stopped-upgrade-056039 stop: (1.286279272s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-056039 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1209 05:44:22.262750 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-331811/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:44:31.980713 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:46:21.780509 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/functional-790468/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 05:47:35.065468 1580521 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1577059/.minikube/profiles/addons-377526/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-056039 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m26.107563556s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (303.94s)
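
Note: unlike TestRunningBinaryUpgrade above, this path halts the cluster with the old binary before the new one takes over, so the new release must adopt an existing, stopped profile rather than a live one. Condensed from the commands above (the /tmp binary is the randomized old-release download from this run):

    # Create and stop a cluster with the previous release...
    /tmp/minikube-v1.35.0.1925602574 start -p stopped-upgrade-056039 --memory=3072 --vm-driver=docker --container-runtime=crio
    /tmp/minikube-v1.35.0.1925602574 -p stopped-upgrade-056039 stop
    # ...then restart it with the binary under test.
    out/minikube-linux-arm64 start -p stopped-upgrade-056039 --memory=3072 --alsologtostderr -v=1 --driver=docker --container-runtime=crio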

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.66s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-056039
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-056039: (1.664834434s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.66s)

                                                
                                    
TestPause/serial/Start (95.01s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-360536 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-360536 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m35.011482749s)
--- PASS: TestPause/serial/Start (95.01s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (28.64s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-360536 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-360536 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (28.607879608s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (28.64s)

                                                
                                    

Test skip (36/316)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0.44
31 TestOffline 0
42 TestAddons/serial/GCPAuth/RealCredentials 0.01
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
112 TestFunctional/parallel/MySQL 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
132 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
261 TestGvisorAddon 0
283 TestImageBuild 0
284 TestISOImage 0
348 TestChangeNoneUser 0
351 TestScheduledStopWindows 0
353 TestSkaffold 0
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.44s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-739882 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-739882" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-739882
--- SKIP: TestDownloadOnlyKic (0.44s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:819: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:543: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1093: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    